Test Report: QEMU_macOS 19476

5d2be5ad06c5c8c1678cb56a2620c3837d13735d:2024-08-19:35852

Failed tests (97/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.66
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.11
46 TestCertOptions 10.1
47 TestCertExpiration 195.28
48 TestDockerFlags 10.13
49 TestForceSystemdFlag 10.54
50 TestForceSystemdEnv 10.21
95 TestFunctional/parallel/ServiceCmdConnect 37.18
167 TestMultiControlPlane/serial/StopSecondaryNode 214.12
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.57
169 TestMultiControlPlane/serial/RestartSecondaryNode 208.69
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.4
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 202.07
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.1
184 TestJSONOutput/start/Command 9.77
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.04
213 TestMinikubeProfile 10.11
216 TestMountStart/serial/StartWithMountFirst 9.95
219 TestMultiNode/serial/FreshStart2Nodes 9.93
220 TestMultiNode/serial/DeployApp2Nodes 80.8
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 51.89
228 TestMultiNode/serial/RestartKeepsNodes 9.22
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 2.06
231 TestMultiNode/serial/RestartMultiNode 5.26
232 TestMultiNode/serial/ValidateNameConflict 20.21
236 TestPreload 10.11
238 TestScheduledStopUnix 10.06
239 TestSkaffold 12.58
242 TestRunningBinaryUpgrade 607.28
244 TestKubernetesUpgrade 18.63
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.36
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.15
260 TestStoppedBinaryUpgrade/Upgrade 588.51
262 TestPause/serial/Start 9.96
272 TestNoKubernetes/serial/StartWithK8s 9.93
273 TestNoKubernetes/serial/StartWithStopK8s 5.33
274 TestNoKubernetes/serial/Start 5.33
278 TestNoKubernetes/serial/StartNoArgs 5.34
280 TestNetworkPlugins/group/auto/Start 9.93
281 TestNetworkPlugins/group/flannel/Start 9.8
282 TestNetworkPlugins/group/enable-default-cni/Start 9.86
283 TestNetworkPlugins/group/bridge/Start 9.85
284 TestNetworkPlugins/group/kindnet/Start 9.84
285 TestNetworkPlugins/group/kubenet/Start 9.86
286 TestNetworkPlugins/group/custom-flannel/Start 9.85
287 TestNetworkPlugins/group/calico/Start 9.77
288 TestNetworkPlugins/group/false/Start 9.8
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.92
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.97
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 5.4
309 TestStartStop/group/embed-certs/serial/FirstStart 10.02
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
313 TestStartStop/group/no-preload/serial/Pause 0.1
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.31
316 TestStartStop/group/embed-certs/serial/DeployApp 0.09
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
320 TestStartStop/group/embed-certs/serial/SecondStart 5.44
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
329 TestStartStop/group/embed-certs/serial/Pause 0.1
331 TestStartStop/group/newest-cni/serial/FirstStart 10.04
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.25
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
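
The detailed logs below show two recurring failure signatures: an HTTP 404 while caching the v1.20.0 darwin/arm64 kubectl binary, and 'Failed to connect to "/var/run/socket_vmnet": Connection refused' while creating the qemu2 VM. To triage which failures belong to the second group, a count over the raw log is enough; the sketch below assumes the report was saved as report.txt (hypothetical filename):

	# Count how many start attempts died because the socket_vmnet daemon was unreachable
	grep -c 'Failed to connect to "/var/run/socket_vmnet"' report.txt

The many ~10-second failures in the table are consistent with that signature: each start aborts at VM creation rather than timing out.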
TestDownloadOnly/v1.20.0/json-events (11.66s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-584000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-584000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.659563667s)

-- stdout --
	{"specversion":"1.0","id":"0d292ff9-e616-4c8c-b54d-13313598581f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-584000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5925f93b-f42a-4e9e-a127-1e4f45a182a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19476"}}
	{"specversion":"1.0","id":"96f644b3-a121-492d-be4d-02d73441fe4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig"}}
	{"specversion":"1.0","id":"62dc5a73-009a-4762-82d7-fc40e38c83fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"da75b27b-0abf-4e30-b0ff-db7f0ce2269e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d706301f-2f70-41cb-8689-4b019607d675","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube"}}
	{"specversion":"1.0","id":"8519dbb7-aa08-4963-92a5-f5c5993b34b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"de3500cf-61d3-41ac-be68-77d394dfcde7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1db50c9f-9f83-4e29-a7de-2df734696eb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3f057877-e303-4abe-ab95-7e9606984082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e257d77-3237-4427-9b26-e2ac94b000e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-584000\" primary control-plane node in \"download-only-584000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4629b9df-581a-472f-b0f1-ac60dda0c4ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"88c37863-600f-468c-bb34-d95ee146d45a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19476-967/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1070af960 0x1070af960 0x1070af960 0x1070af960 0x1070af960 0x1070af960 0x1070af960] Decompressors:map[bz2:0x1400080f5b0 gz:0x1400080f5b8 tar:0x1400080f560 tar.bz2:0x1400080f570 tar.gz:0x1400080f580 tar.xz:0x1400080f590 tar.zst:0x1400080f5a0 tbz2:0x1400080f570 tgz:0x140
0080f580 txz:0x1400080f590 tzst:0x1400080f5a0 xz:0x1400080f5c0 zip:0x1400080f5d0 zst:0x1400080f5c8] Getters:map[file:0x140017548a0 http:0x1400083a280 https:0x1400083a370] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"d6214f57-efca-4320-bace-eb024145e920","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0819 03:34:51.858962    1436 out.go:345] Setting OutFile to fd 1 ...
	I0819 03:34:51.859098    1436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:34:51.859102    1436 out.go:358] Setting ErrFile to fd 2...
	I0819 03:34:51.859104    1436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:34:51.859218    1436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	W0819 03:34:51.859313    1436 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19476-967/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19476-967/.minikube/config/config.json: no such file or directory
	I0819 03:34:51.860525    1436 out.go:352] Setting JSON to true
	I0819 03:34:51.877683    1436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":254,"bootTime":1724063437,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 03:34:51.877761    1436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 03:34:51.882433    1436 out.go:97] [download-only-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 03:34:51.882559    1436 notify.go:220] Checking for updates...
	W0819 03:34:51.882595    1436 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 03:34:51.886441    1436 out.go:169] MINIKUBE_LOCATION=19476
	I0819 03:34:51.889439    1436 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 03:34:51.894480    1436 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 03:34:51.898499    1436 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 03:34:51.901430    1436 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	W0819 03:34:51.907438    1436 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 03:34:51.907668    1436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 03:34:51.912320    1436 out.go:97] Using the qemu2 driver based on user configuration
	I0819 03:34:51.912338    1436 start.go:297] selected driver: qemu2
	I0819 03:34:51.912341    1436 start.go:901] validating driver "qemu2" against <nil>
	I0819 03:34:51.912417    1436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 03:34:51.915504    1436 out.go:169] Automatically selected the socket_vmnet network
	I0819 03:34:51.921158    1436 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 03:34:51.921255    1436 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 03:34:51.921299    1436 cni.go:84] Creating CNI manager for ""
	I0819 03:34:51.921316    1436 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 03:34:51.921369    1436 start.go:340] cluster config:
	{Name:download-only-584000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 03:34:51.926567    1436 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 03:34:51.931476    1436 out.go:97] Downloading VM boot image ...
	I0819 03:34:51.931508    1436 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0819 03:34:56.631368    1436 out.go:97] Starting "download-only-584000" primary control-plane node in "download-only-584000" cluster
	I0819 03:34:56.631392    1436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 03:34:56.694181    1436 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 03:34:56.694203    1436 cache.go:56] Caching tarball of preloaded images
	I0819 03:34:56.694384    1436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 03:34:56.699535    1436 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 03:34:56.699545    1436 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 03:34:56.786722    1436 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 03:35:02.397869    1436 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 03:35:02.398330    1436 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 03:35:03.102925    1436 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 03:35:03.103120    1436 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/download-only-584000/config.json ...
	I0819 03:35:03.103135    1436 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/download-only-584000/config.json: {Name:mk98fb5cfaef9e8b199d72380c0c2b4f1741ce36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 03:35:03.103308    1436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 03:35:03.103488    1436 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0819 03:35:03.440938    1436 out.go:193] 
	W0819 03:35:03.448950    1436 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19476-967/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1070af960 0x1070af960 0x1070af960 0x1070af960 0x1070af960 0x1070af960 0x1070af960] Decompressors:map[bz2:0x1400080f5b0 gz:0x1400080f5b8 tar:0x1400080f560 tar.bz2:0x1400080f570 tar.gz:0x1400080f580 tar.xz:0x1400080f590 tar.zst:0x1400080f5a0 tbz2:0x1400080f570 tgz:0x1400080f580 txz:0x1400080f590 tzst:0x1400080f5a0 xz:0x1400080f5c0 zip:0x1400080f5d0 zst:0x1400080f5c8] Getters:map[file:0x140017548a0 http:0x1400083a280 https:0x1400083a370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0819 03:35:03.448975    1436 out_reason.go:110] 
	W0819 03:35:03.456887    1436 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 03:35:03.459900    1436 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-584000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.66s)
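The INET_CACHE_KUBECTL error above is the root cause: the checksum file for the v1.20.0 darwin/arm64 kubectl returns HTTP 404, which strongly suggests upstream never published that artifact for this platform. This can be confirmed from any machine; a diagnostic sketch, not part of the test suite (-L is needed because dl.k8s.io redirects to a CDN):

	# Both requests should report 404 if the artifact does not exist upstream
	curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl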

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19476-967/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
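This failure is purely downstream of the 404 above: the kubectl binary was never cached, so the existence check fails. Reproducing the check by hand is a one-liner (sketch):

	# "No such file or directory" is expected after the failed download
	stat /Users/jenkins/minikube-integration/19476-967/.minikube/cache/darwin/arm64/v1.20.0/kubectl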

TestOffline (10.11s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-824000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-824000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.961148333s)

-- stdout --
	* [offline-docker-824000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-824000" primary control-plane node in "offline-docker-824000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-824000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:12:00.585523    3662 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:12:00.585661    3662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:12:00.585664    3662 out.go:358] Setting ErrFile to fd 2...
	I0819 04:12:00.585667    3662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:12:00.585800    3662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:12:00.586807    3662 out.go:352] Setting JSON to false
	I0819 04:12:00.604362    3662 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2483,"bootTime":1724063437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:12:00.604445    3662 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:12:00.609888    3662 out.go:177] * [offline-docker-824000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:12:00.617850    3662 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:12:00.617852    3662 notify.go:220] Checking for updates...
	I0819 04:12:00.624752    3662 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:12:00.627800    3662 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:12:00.630810    3662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:12:00.633705    3662 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:12:00.636780    3662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:12:00.640268    3662 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:12:00.640322    3662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:12:00.644708    3662 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:12:00.651831    3662 start.go:297] selected driver: qemu2
	I0819 04:12:00.651849    3662 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:12:00.651858    3662 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:12:00.653794    3662 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:12:00.656759    3662 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:12:00.659859    3662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:12:00.659892    3662 cni.go:84] Creating CNI manager for ""
	I0819 04:12:00.659898    3662 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:12:00.659906    3662 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:12:00.659949    3662 start.go:340] cluster config:
	{Name:offline-docker-824000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:12:00.663408    3662 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:12:00.670753    3662 out.go:177] * Starting "offline-docker-824000" primary control-plane node in "offline-docker-824000" cluster
	I0819 04:12:00.674809    3662 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:12:00.674839    3662 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:12:00.674850    3662 cache.go:56] Caching tarball of preloaded images
	I0819 04:12:00.674925    3662 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:12:00.674930    3662 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:12:00.674997    3662 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/offline-docker-824000/config.json ...
	I0819 04:12:00.675007    3662 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/offline-docker-824000/config.json: {Name:mkc277ccc4940ceddf00687a1a72d1452f937d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:12:00.675322    3662 start.go:360] acquireMachinesLock for offline-docker-824000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:12:00.675359    3662 start.go:364] duration metric: took 29.041µs to acquireMachinesLock for "offline-docker-824000"
	I0819 04:12:00.675371    3662 start.go:93] Provisioning new machine with config: &{Name:offline-docker-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:12:00.675397    3662 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:12:00.683734    3662 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:12:00.699891    3662 start.go:159] libmachine.API.Create for "offline-docker-824000" (driver="qemu2")
	I0819 04:12:00.699929    3662 client.go:168] LocalClient.Create starting
	I0819 04:12:00.700019    3662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:12:00.700053    3662 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:00.700062    3662 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:00.700109    3662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:12:00.700131    3662 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:00.700139    3662 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:00.700505    3662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:12:00.853518    3662 main.go:141] libmachine: Creating SSH key...
	I0819 04:12:01.106843    3662 main.go:141] libmachine: Creating Disk image...
	I0819 04:12:01.106852    3662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:12:01.107043    3662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2
	I0819 04:12:01.116550    3662 main.go:141] libmachine: STDOUT: 
	I0819 04:12:01.116572    3662 main.go:141] libmachine: STDERR: 
	I0819 04:12:01.116634    3662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2 +20000M
	I0819 04:12:01.125582    3662 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:12:01.125604    3662 main.go:141] libmachine: STDERR: 
	I0819 04:12:01.125620    3662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2
	I0819 04:12:01.125628    3662 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:12:01.125644    3662 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:12:01.125668    3662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:ac:a7:14:a1:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2
	I0819 04:12:01.127477    3662 main.go:141] libmachine: STDOUT: 
	I0819 04:12:01.127497    3662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:12:01.127516    3662 client.go:171] duration metric: took 427.573ms to LocalClient.Create
	I0819 04:12:03.129559    3662 start.go:128] duration metric: took 2.454187875s to createHost
	I0819 04:12:03.129584    3662 start.go:83] releasing machines lock for "offline-docker-824000", held for 2.454254916s
	W0819 04:12:03.129597    3662 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:03.146678    3662 out.go:177] * Deleting "offline-docker-824000" in qemu2 ...
	W0819 04:12:03.159269    3662 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:03.159278    3662 start.go:729] Will try again in 5 seconds ...
	I0819 04:12:08.161445    3662 start.go:360] acquireMachinesLock for offline-docker-824000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:12:08.161878    3662 start.go:364] duration metric: took 304.875µs to acquireMachinesLock for "offline-docker-824000"
	I0819 04:12:08.161996    3662 start.go:93] Provisioning new machine with config: &{Name:offline-docker-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:12:08.162236    3662 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:12:08.170524    3662 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:12:08.215077    3662 start.go:159] libmachine.API.Create for "offline-docker-824000" (driver="qemu2")
	I0819 04:12:08.215156    3662 client.go:168] LocalClient.Create starting
	I0819 04:12:08.215326    3662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:12:08.215390    3662 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:08.215409    3662 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:08.215498    3662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:12:08.215546    3662 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:08.215560    3662 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:08.216098    3662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:12:08.373017    3662 main.go:141] libmachine: Creating SSH key...
	I0819 04:12:08.447138    3662 main.go:141] libmachine: Creating Disk image...
	I0819 04:12:08.447144    3662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:12:08.447361    3662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2
	I0819 04:12:08.456487    3662 main.go:141] libmachine: STDOUT: 
	I0819 04:12:08.456512    3662 main.go:141] libmachine: STDERR: 
	I0819 04:12:08.456566    3662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2 +20000M
	I0819 04:12:08.464346    3662 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:12:08.464368    3662 main.go:141] libmachine: STDERR: 
	I0819 04:12:08.464381    3662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2
	I0819 04:12:08.464385    3662 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:12:08.464394    3662 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:12:08.464432    3662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:21:57:31:6f:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/offline-docker-824000/disk.qcow2
	I0819 04:12:08.465974    3662 main.go:141] libmachine: STDOUT: 
	I0819 04:12:08.465996    3662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:12:08.466009    3662 client.go:171] duration metric: took 250.8385ms to LocalClient.Create
	I0819 04:12:10.468180    3662 start.go:128] duration metric: took 2.305941s to createHost
	I0819 04:12:10.468262    3662 start.go:83] releasing machines lock for "offline-docker-824000", held for 2.306396542s
	W0819 04:12:10.468599    3662 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-824000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-824000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:10.483345    3662 out.go:201] 
	W0819 04:12:10.488386    3662 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:12:10.488452    3662 out.go:270] * 
	* 
	W0819 04:12:10.491448    3662 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:12:10.502164    3662 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-824000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-19 04:12:10.518688 -0700 PDT m=+2238.739027709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-824000 -n offline-docker-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-824000 -n offline-docker-824000: exit status 7 (68.0565ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-824000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-824000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-824000
--- FAIL: TestOffline (10.11s)
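Both start attempts in this test die at the same point: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal host-side check, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest (the service commands are assumptions about this agent's setup, not taken from the log):

	# Does the socket exist?
	ls -l /var/run/socket_vmnet
	# Restart the daemon; it must run as root to create the socket
	sudo brew services restart socket_vmnet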

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-148000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-148000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.841282667s)

-- stdout --
	* [cert-options-148000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-148000" primary control-plane node in "cert-options-148000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-148000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-148000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-148000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-148000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-148000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.94ms)

-- stdout --
	* The control-plane node cert-options-148000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-148000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-148000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-148000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-148000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-148000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.532125ms)

-- stdout --
	* The control-plane node cert-options-148000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-148000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-148000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port.
-- stdout --
	* The control-plane node cert-options-148000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-148000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-19 04:12:41.008899 -0700 PDT m=+2269.229660126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-148000 -n cert-options-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-148000 -n cert-options-148000: exit status 7 (29.6065ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-148000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-148000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-148000
--- FAIL: TestCertOptions (10.10s)
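Note: none of the certificate assertions above ran against a live cluster; the start aborted up front because the qemu2 driver could not reach the socket_vmnet daemon ("Connection refused" on /var/run/socket_vmnet). A minimal sketch of how one might confirm that on the build host, assuming the /opt/socket_vmnet layout shown in these logs (socket_vmnet_client, its SOCKET-then-COMMAND calling convention, and the socket path all appear verbatim in the stderr above):

    # Is anything listening on the socket the driver dials?
    ls -l /var/run/socket_vmnet
    # socket_vmnet_client connects to the socket, then execs the given command;
    # if the daemon is down it fails with the same "Connection refused" seen above.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true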

TestCertExpiration (195.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.981396917s)

-- stdout --
	* [cert-expiration-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-371000" primary control-plane node in "cert-expiration-371000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-371000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.186223583s)

-- stdout --
	* [cert-expiration-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-371000" primary control-plane node in "cert-expiration-371000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-371000" primary control-plane node in "cert-expiration-371000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-19 04:15:40.986426 -0700 PDT m=+2449.205583209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-371000 -n cert-expiration-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-371000 -n cert-expiration-371000: exit status 7 (31.970583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-371000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-371000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-371000
--- FAIL: TestCertExpiration (195.28s)
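Note: both starts die on the same socket_vmnet error, so the cert-rotation path under test (--cert-expiration=3m, then 8760h) is never exercised. On a cluster that does boot, the expiry could be read directly from the guest; a sketch using the cert path shown in the TestCertOptions output above and standard openssl flags:

    out/minikube-darwin-arm64 ssh -p cert-expiration-371000 -- \
      "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"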

TestDockerFlags (10.13s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-366000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-366000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.895626125s)

-- stdout --
	* [docker-flags-366000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-366000" primary control-plane node in "docker-flags-366000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-366000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:12:20.908846    3850 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:12:20.908979    3850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:12:20.908982    3850 out.go:358] Setting ErrFile to fd 2...
	I0819 04:12:20.908985    3850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:12:20.909120    3850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:12:20.910145    3850 out.go:352] Setting JSON to false
	I0819 04:12:20.926283    3850 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2503,"bootTime":1724063437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:12:20.926350    3850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:12:20.937459    3850 out.go:177] * [docker-flags-366000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:12:20.941483    3850 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:12:20.941524    3850 notify.go:220] Checking for updates...
	I0819 04:12:20.948400    3850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:12:20.951456    3850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:12:20.952964    3850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:12:20.956414    3850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:12:20.959449    3850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:12:20.962864    3850 config.go:182] Loaded profile config "force-systemd-flag-955000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:12:20.962929    3850 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:12:20.962975    3850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:12:20.967403    3850 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:12:20.974490    3850 start.go:297] selected driver: qemu2
	I0819 04:12:20.974497    3850 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:12:20.974503    3850 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:12:20.976741    3850 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:12:20.979475    3850 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:12:20.982521    3850 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0819 04:12:20.982582    3850 cni.go:84] Creating CNI manager for ""
	I0819 04:12:20.982590    3850 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:12:20.982594    3850 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:12:20.982638    3850 start.go:340] cluster config:
	{Name:docker-flags-366000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-366000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:12:20.986216    3850 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:12:20.994427    3850 out.go:177] * Starting "docker-flags-366000" primary control-plane node in "docker-flags-366000" cluster
	I0819 04:12:20.998398    3850 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:12:20.998415    3850 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:12:20.998427    3850 cache.go:56] Caching tarball of preloaded images
	I0819 04:12:20.998488    3850 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:12:20.998495    3850 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:12:20.998587    3850 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/docker-flags-366000/config.json ...
	I0819 04:12:20.998604    3850 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/docker-flags-366000/config.json: {Name:mk145e033fc0718c182f4de3ce4958c801d3a5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:12:20.999238    3850 start.go:360] acquireMachinesLock for docker-flags-366000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:12:20.999274    3850 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "docker-flags-366000"
	I0819 04:12:20.999288    3850 start.go:93] Provisioning new machine with config: &{Name:docker-flags-366000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-366000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:12:20.999316    3850 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:12:21.008441    3850 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:12:21.026803    3850 start.go:159] libmachine.API.Create for "docker-flags-366000" (driver="qemu2")
	I0819 04:12:21.026836    3850 client.go:168] LocalClient.Create starting
	I0819 04:12:21.026907    3850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:12:21.026946    3850 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:21.026955    3850 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:21.026994    3850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:12:21.027021    3850 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:21.027029    3850 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:21.027574    3850 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:12:21.178734    3850 main.go:141] libmachine: Creating SSH key...
	I0819 04:12:21.244034    3850 main.go:141] libmachine: Creating Disk image...
	I0819 04:12:21.244043    3850 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:12:21.244230    3850 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2
	I0819 04:12:21.253556    3850 main.go:141] libmachine: STDOUT: 
	I0819 04:12:21.253573    3850 main.go:141] libmachine: STDERR: 
	I0819 04:12:21.253621    3850 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2 +20000M
	I0819 04:12:21.261486    3850 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:12:21.261501    3850 main.go:141] libmachine: STDERR: 
	I0819 04:12:21.261514    3850 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2
	I0819 04:12:21.261518    3850 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:12:21.261531    3850 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:12:21.261556    3850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:1f:62:df:22:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2
	I0819 04:12:21.263128    3850 main.go:141] libmachine: STDOUT: 
	I0819 04:12:21.263142    3850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:12:21.263160    3850 client.go:171] duration metric: took 236.321375ms to LocalClient.Create
	I0819 04:12:23.265304    3850 start.go:128] duration metric: took 2.266001417s to createHost
	I0819 04:12:23.265368    3850 start.go:83] releasing machines lock for "docker-flags-366000", held for 2.266114917s
	W0819 04:12:23.265480    3850 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:23.292862    3850 out.go:177] * Deleting "docker-flags-366000" in qemu2 ...
	W0819 04:12:23.318039    3850 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:23.318058    3850 start.go:729] Will try again in 5 seconds ...
	I0819 04:12:28.320222    3850 start.go:360] acquireMachinesLock for docker-flags-366000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:12:28.320833    3850 start.go:364] duration metric: took 479.458µs to acquireMachinesLock for "docker-flags-366000"
	I0819 04:12:28.320981    3850 start.go:93] Provisioning new machine with config: &{Name:docker-flags-366000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-366000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:12:28.321246    3850 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:12:28.329763    3850 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:12:28.380074    3850 start.go:159] libmachine.API.Create for "docker-flags-366000" (driver="qemu2")
	I0819 04:12:28.380124    3850 client.go:168] LocalClient.Create starting
	I0819 04:12:28.380229    3850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:12:28.380283    3850 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:28.380301    3850 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:28.380366    3850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:12:28.380409    3850 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:28.380420    3850 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:28.381267    3850 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:12:28.567741    3850 main.go:141] libmachine: Creating SSH key...
	I0819 04:12:28.707933    3850 main.go:141] libmachine: Creating Disk image...
	I0819 04:12:28.707939    3850 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:12:28.708117    3850 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2
	I0819 04:12:28.717646    3850 main.go:141] libmachine: STDOUT: 
	I0819 04:12:28.717733    3850 main.go:141] libmachine: STDERR: 
	I0819 04:12:28.717785    3850 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2 +20000M
	I0819 04:12:28.725556    3850 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:12:28.725573    3850 main.go:141] libmachine: STDERR: 
	I0819 04:12:28.725587    3850 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2
	I0819 04:12:28.725591    3850 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:12:28.725603    3850 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:12:28.725639    3850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:32:dd:f8:4d:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/docker-flags-366000/disk.qcow2
	I0819 04:12:28.727238    3850 main.go:141] libmachine: STDOUT: 
	I0819 04:12:28.727275    3850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:12:28.727287    3850 client.go:171] duration metric: took 347.162125ms to LocalClient.Create
	I0819 04:12:30.729431    3850 start.go:128] duration metric: took 2.408188208s to createHost
	I0819 04:12:30.729490    3850 start.go:83] releasing machines lock for "docker-flags-366000", held for 2.408657333s
	W0819 04:12:30.729821    3850 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-366000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-366000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:30.740498    3850 out.go:201] 
	W0819 04:12:30.744620    3850 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:12:30.744644    3850 out.go:270] * 
	* 
	W0819 04:12:30.747427    3850 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:12:30.761463    3850 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-366000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-366000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-366000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.885ms)

-- stdout --
	* The control-plane node docker-flags-366000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-366000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-366000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-366000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-366000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-366000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-366000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-366000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-366000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.515417ms)

-- stdout --
	* The control-plane node docker-flags-366000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-366000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-366000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-366000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-366000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-366000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-19 04:12:30.905093 -0700 PDT m=+2259.125714251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-366000 -n docker-flags-366000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-366000 -n docker-flags-366000: exit status 7 (28.89975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-366000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-366000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-366000
--- FAIL: TestDockerFlags (10.13s)
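Note: because the VM never boots, both systemctl probes return the "host is not running" stub, which is what the mismatch messages above are comparing against. On a healthy cluster the same two probes would be expected to show the flags wired into the Docker unit, roughly as follows (the commented sample output is an assumption, inferred from the test's assertions on FOO=BAR, BAZ=BAT, --docker-opt=debug and --docker-opt=icc=true):

    out/minikube-darwin-arm64 -p docker-flags-366000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # Environment=FOO=BAR BAZ=BAT ...
    out/minikube-darwin-arm64 -p docker-flags-366000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    # ExecStart=... dockerd ... --debug --icc=true ...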

TestForceSystemdFlag (10.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-955000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-955000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.348200583s)

-- stdout --
	* [force-systemd-flag-955000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-955000" primary control-plane node in "force-systemd-flag-955000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-955000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:12:15.300964    3829 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:12:15.301097    3829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:12:15.301100    3829 out.go:358] Setting ErrFile to fd 2...
	I0819 04:12:15.301107    3829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:12:15.301231    3829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:12:15.302271    3829 out.go:352] Setting JSON to false
	I0819 04:12:15.318320    3829 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2498,"bootTime":1724063437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:12:15.318393    3829 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:12:15.324273    3829 out.go:177] * [force-systemd-flag-955000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:12:15.332197    3829 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:12:15.332235    3829 notify.go:220] Checking for updates...
	I0819 04:12:15.338632    3829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:12:15.342131    3829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:12:15.346151    3829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:12:15.347532    3829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:12:15.351156    3829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:12:15.354459    3829 config.go:182] Loaded profile config "force-systemd-env-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:12:15.354528    3829 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:12:15.354583    3829 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:12:15.359037    3829 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:12:15.366146    3829 start.go:297] selected driver: qemu2
	I0819 04:12:15.366153    3829 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:12:15.366168    3829 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:12:15.368328    3829 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:12:15.372207    3829 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:12:15.375263    3829 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 04:12:15.375284    3829 cni.go:84] Creating CNI manager for ""
	I0819 04:12:15.375295    3829 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:12:15.375300    3829 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:12:15.375336    3829 start.go:340] cluster config:
	{Name:force-systemd-flag-955000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-955000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:12:15.378924    3829 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:12:15.385062    3829 out.go:177] * Starting "force-systemd-flag-955000" primary control-plane node in "force-systemd-flag-955000" cluster
	I0819 04:12:15.389170    3829 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:12:15.389186    3829 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:12:15.389194    3829 cache.go:56] Caching tarball of preloaded images
	I0819 04:12:15.389256    3829 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:12:15.389267    3829 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:12:15.389335    3829 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/force-systemd-flag-955000/config.json ...
	I0819 04:12:15.389349    3829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/force-systemd-flag-955000/config.json: {Name:mk2cbe8b46a8743ecfc1656c15f4b09a9177189f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:12:15.389590    3829 start.go:360] acquireMachinesLock for force-systemd-flag-955000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:12:15.389626    3829 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "force-systemd-flag-955000"
	I0819 04:12:15.389639    3829 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-955000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-955000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:12:15.389669    3829 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:12:15.397130    3829 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:12:15.414731    3829 start.go:159] libmachine.API.Create for "force-systemd-flag-955000" (driver="qemu2")
	I0819 04:12:15.414767    3829 client.go:168] LocalClient.Create starting
	I0819 04:12:15.414829    3829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:12:15.414858    3829 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:15.414871    3829 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:15.414910    3829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:12:15.414936    3829 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:15.414943    3829 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:15.415320    3829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:12:15.566550    3829 main.go:141] libmachine: Creating SSH key...
	I0819 04:12:15.672989    3829 main.go:141] libmachine: Creating Disk image...
	I0819 04:12:15.672995    3829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:12:15.673175    3829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2
	I0819 04:12:15.682345    3829 main.go:141] libmachine: STDOUT: 
	I0819 04:12:15.682362    3829 main.go:141] libmachine: STDERR: 
	I0819 04:12:15.682409    3829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2 +20000M
	I0819 04:12:15.690146    3829 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:12:15.690159    3829 main.go:141] libmachine: STDERR: 
	I0819 04:12:15.690182    3829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2
	I0819 04:12:15.690188    3829 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:12:15.690198    3829 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:12:15.690222    3829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0c:b5:60:37:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2
	I0819 04:12:15.691778    3829 main.go:141] libmachine: STDOUT: 
	I0819 04:12:15.691800    3829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:12:15.691816    3829 client.go:171] duration metric: took 277.047833ms to LocalClient.Create
	I0819 04:12:17.693962    3829 start.go:128] duration metric: took 2.304307458s to createHost
	I0819 04:12:17.694075    3829 start.go:83] releasing machines lock for "force-systemd-flag-955000", held for 2.304433s
	W0819 04:12:17.694135    3829 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:17.701486    3829 out.go:177] * Deleting "force-systemd-flag-955000" in qemu2 ...
	W0819 04:12:17.730712    3829 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:17.730741    3829 start.go:729] Will try again in 5 seconds ...
	I0819 04:12:22.731560    3829 start.go:360] acquireMachinesLock for force-systemd-flag-955000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:12:23.265570    3829 start.go:364] duration metric: took 533.915417ms to acquireMachinesLock for "force-systemd-flag-955000"
	I0819 04:12:23.265748    3829 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-955000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-955000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:12:23.266058    3829 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:12:23.280739    3829 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:12:23.331618    3829 start.go:159] libmachine.API.Create for "force-systemd-flag-955000" (driver="qemu2")
	I0819 04:12:23.331667    3829 client.go:168] LocalClient.Create starting
	I0819 04:12:23.331786    3829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:12:23.331846    3829 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:23.331863    3829 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:23.331921    3829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:12:23.331964    3829 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:23.331975    3829 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:23.332563    3829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:12:23.494677    3829 main.go:141] libmachine: Creating SSH key...
	I0819 04:12:23.556482    3829 main.go:141] libmachine: Creating Disk image...
	I0819 04:12:23.556488    3829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:12:23.557205    3829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2
	I0819 04:12:23.566262    3829 main.go:141] libmachine: STDOUT: 
	I0819 04:12:23.566284    3829 main.go:141] libmachine: STDERR: 
	I0819 04:12:23.566327    3829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2 +20000M
	I0819 04:12:23.574095    3829 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:12:23.574111    3829 main.go:141] libmachine: STDERR: 
	I0819 04:12:23.574124    3829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2
	I0819 04:12:23.574127    3829 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:12:23.574142    3829 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:12:23.574192    3829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:60:69:ab:40:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-flag-955000/disk.qcow2
	I0819 04:12:23.575824    3829 main.go:141] libmachine: STDOUT: 
	I0819 04:12:23.575844    3829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:12:23.575856    3829 client.go:171] duration metric: took 244.187875ms to LocalClient.Create
	I0819 04:12:25.578013    3829 start.go:128] duration metric: took 2.311944166s to createHost
	I0819 04:12:25.578116    3829 start.go:83] releasing machines lock for "force-systemd-flag-955000", held for 2.312466958s
	W0819 04:12:25.578453    3829 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-955000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-955000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:25.591084    3829 out.go:201] 
	W0819 04:12:25.595224    3829 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:12:25.595306    3829 out.go:270] * 
	* 
	W0819 04:12:25.597660    3829 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:12:25.608085    3829 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-955000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-955000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-955000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.159583ms)

-- stdout --
	* The control-plane node force-systemd-flag-955000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-955000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-955000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-19 04:12:25.70055 -0700 PDT m=+2253.921100001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-955000 -n force-systemd-flag-955000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-955000 -n force-systemd-flag-955000: exit status 7 (34.264791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-955000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-955000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-955000
--- FAIL: TestForceSystemdFlag (10.54s)
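Both force-systemd starts above fail at the same step: socket_vmnet_client cannot reach the vmnet daemon at /var/run/socket_vmnet, so the QEMU VM is never launched and minikube exits with GUEST_PROVISION. A quick way to confirm the daemon (rather than minikube) is at fault is to probe the socket directly. This is a diagnostic sketch only: the paths are taken from the log above, and it assumes socket_vmnet_client simply execs the given command once the socket is connected.

    # Does the daemon's socket exist, and is the daemon process running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # Probe the socket the same way minikube does; a healthy daemon lets
    # this exit 0 instead of printing "Connection refused".
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true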

TestForceSystemdEnv (10.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-413000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-413000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.026796417s)

-- stdout --
	* [force-systemd-env-413000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-413000" primary control-plane node in "force-systemd-env-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:12:10.694813    3807 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:12:10.694922    3807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:12:10.694925    3807 out.go:358] Setting ErrFile to fd 2...
	I0819 04:12:10.694927    3807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:12:10.695065    3807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:12:10.696080    3807 out.go:352] Setting JSON to false
	I0819 04:12:10.712576    3807 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2493,"bootTime":1724063437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:12:10.712660    3807 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:12:10.717225    3807 out.go:177] * [force-systemd-env-413000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:12:10.725201    3807 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:12:10.725284    3807 notify.go:220] Checking for updates...
	I0819 04:12:10.730380    3807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:12:10.733131    3807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:12:10.736202    3807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:12:10.739187    3807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:12:10.742169    3807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0819 04:12:10.745577    3807 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:12:10.745621    3807 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:12:10.750213    3807 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:12:10.757139    3807 start.go:297] selected driver: qemu2
	I0819 04:12:10.757146    3807 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:12:10.757155    3807 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:12:10.759532    3807 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:12:10.762158    3807 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:12:10.765240    3807 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 04:12:10.765272    3807 cni.go:84] Creating CNI manager for ""
	I0819 04:12:10.765280    3807 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:12:10.765284    3807 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:12:10.765313    3807 start.go:340] cluster config:
	{Name:force-systemd-env-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:12:10.768909    3807 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:12:10.775961    3807 out.go:177] * Starting "force-systemd-env-413000" primary control-plane node in "force-systemd-env-413000" cluster
	I0819 04:12:10.780133    3807 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:12:10.780159    3807 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:12:10.780167    3807 cache.go:56] Caching tarball of preloaded images
	I0819 04:12:10.780252    3807 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:12:10.780260    3807 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:12:10.780313    3807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/force-systemd-env-413000/config.json ...
	I0819 04:12:10.780322    3807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/force-systemd-env-413000/config.json: {Name:mka4391fc2f65bb29bd36b61c2ffb5f740ad4599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:12:10.780554    3807 start.go:360] acquireMachinesLock for force-systemd-env-413000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:12:10.780587    3807 start.go:364] duration metric: took 25.208µs to acquireMachinesLock for "force-systemd-env-413000"
	I0819 04:12:10.780600    3807 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:12:10.780625    3807 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:12:10.788137    3807 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:12:10.803794    3807 start.go:159] libmachine.API.Create for "force-systemd-env-413000" (driver="qemu2")
	I0819 04:12:10.803811    3807 client.go:168] LocalClient.Create starting
	I0819 04:12:10.803890    3807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:12:10.803921    3807 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:10.803930    3807 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:10.803967    3807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:12:10.803989    3807 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:10.803997    3807 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:10.804329    3807 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:12:10.950146    3807 main.go:141] libmachine: Creating SSH key...
	I0819 04:12:11.087695    3807 main.go:141] libmachine: Creating Disk image...
	I0819 04:12:11.087707    3807 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:12:11.087892    3807 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2
	I0819 04:12:11.097595    3807 main.go:141] libmachine: STDOUT: 
	I0819 04:12:11.097616    3807 main.go:141] libmachine: STDERR: 
	I0819 04:12:11.097673    3807 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2 +20000M
	I0819 04:12:11.105874    3807 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:12:11.105893    3807 main.go:141] libmachine: STDERR: 
	I0819 04:12:11.105905    3807 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2
	I0819 04:12:11.105910    3807 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:12:11.105923    3807 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:12:11.105950    3807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:db:42:d6:47:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2
	I0819 04:12:11.107641    3807 main.go:141] libmachine: STDOUT: 
	I0819 04:12:11.107660    3807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:12:11.107683    3807 client.go:171] duration metric: took 303.872042ms to LocalClient.Create
	I0819 04:12:13.109867    3807 start.go:128] duration metric: took 2.329246166s to createHost
	I0819 04:12:13.109941    3807 start.go:83] releasing machines lock for "force-systemd-env-413000", held for 2.329376083s
	W0819 04:12:13.109992    3807 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:13.121174    3807 out.go:177] * Deleting "force-systemd-env-413000" in qemu2 ...
	W0819 04:12:13.149900    3807 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:13.149920    3807 start.go:729] Will try again in 5 seconds ...
	I0819 04:12:18.152010    3807 start.go:360] acquireMachinesLock for force-systemd-env-413000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:12:18.152488    3807 start.go:364] duration metric: took 383.667µs to acquireMachinesLock for "force-systemd-env-413000"
	I0819 04:12:18.152638    3807 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:12:18.152866    3807 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:12:18.160289    3807 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:12:18.211999    3807 start.go:159] libmachine.API.Create for "force-systemd-env-413000" (driver="qemu2")
	I0819 04:12:18.212075    3807 client.go:168] LocalClient.Create starting
	I0819 04:12:18.212180    3807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:12:18.212242    3807 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:18.212259    3807 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:18.212337    3807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:12:18.212382    3807 main.go:141] libmachine: Decoding PEM data...
	I0819 04:12:18.212397    3807 main.go:141] libmachine: Parsing certificate...
	I0819 04:12:18.212924    3807 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:12:18.383145    3807 main.go:141] libmachine: Creating SSH key...
	I0819 04:12:18.626674    3807 main.go:141] libmachine: Creating Disk image...
	I0819 04:12:18.626685    3807 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:12:18.626880    3807 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2
	I0819 04:12:18.636144    3807 main.go:141] libmachine: STDOUT: 
	I0819 04:12:18.636167    3807 main.go:141] libmachine: STDERR: 
	I0819 04:12:18.636217    3807 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2 +20000M
	I0819 04:12:18.644215    3807 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:12:18.644246    3807 main.go:141] libmachine: STDERR: 
	I0819 04:12:18.644263    3807 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2
	I0819 04:12:18.644269    3807 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:12:18.644274    3807 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:12:18.644315    3807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:ef:12:8b:0d:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/force-systemd-env-413000/disk.qcow2
	I0819 04:12:18.645962    3807 main.go:141] libmachine: STDOUT: 
	I0819 04:12:18.645976    3807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:12:18.645990    3807 client.go:171] duration metric: took 433.915959ms to LocalClient.Create
	I0819 04:12:20.648136    3807 start.go:128] duration metric: took 2.495239167s to createHost
	I0819 04:12:20.648213    3807 start.go:83] releasing machines lock for "force-systemd-env-413000", held for 2.495732333s
	W0819 04:12:20.648640    3807 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:12:20.662194    3807 out.go:201] 
	W0819 04:12:20.666324    3807 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:12:20.666385    3807 out.go:270] * 
	* 
	W0819 04:12:20.669004    3807 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:12:20.679137    3807 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-413000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-413000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-413000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.109667ms)

-- stdout --
	* The control-plane node force-systemd-env-413000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-413000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-413000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-19 04:12:20.772444 -0700 PDT m=+2248.992925959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-413000 -n force-systemd-env-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-413000 -n force-systemd-env-413000: exit status 7 (33.570584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-413000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-413000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-413000
--- FAIL: TestForceSystemdEnv (10.21s)
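TestForceSystemdEnv fails with the identical "Connection refused" error as TestForceSystemdFlag, which points at the shared socket_vmnet daemon on the build agent rather than at either test. If the daemon is managed by launchd, restarting it is the usual remedy; a hedged sketch, since the service label and the Homebrew-based restart below depend on how socket_vmnet was installed on this host:

    # Look for a launchd job managing the daemon (label varies by install):
    sudo launchctl list | grep -i vmnet
    # If socket_vmnet was installed through Homebrew, restart its service:
    sudo brew services restart socket_vmnet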

TestFunctional/parallel/ServiceCmdConnect (37.18s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-522000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-522000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-h7cln" [d702251c-ba49-424f-ace5-1d1bfdc53a30] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-h7cln" [d702251c-ba49-424f-ace5-1d1bfdc53a30] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.008637541s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31528
functional_test.go:1661: error fetching http://192.168.105.4:31528: Get "http://192.168.105.4:31528": dial tcp 192.168.105.4:31528: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31528: Get "http://192.168.105.4:31528": dial tcp 192.168.105.4:31528: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31528: Get "http://192.168.105.4:31528": dial tcp 192.168.105.4:31528: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31528: Get "http://192.168.105.4:31528": dial tcp 192.168.105.4:31528: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31528: Get "http://192.168.105.4:31528": dial tcp 192.168.105.4:31528: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31528: Get "http://192.168.105.4:31528": dial tcp 192.168.105.4:31528: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31528: Get "http://192.168.105.4:31528": dial tcp 192.168.105.4:31528: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31528: Get "http://192.168.105.4:31528": dial tcp 192.168.105.4:31528: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-522000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-h7cln
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-522000/192.168.105.4
Start Time:       Mon, 19 Aug 2024 03:45:30 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://c97b90bcc542a2bd94f9d19969e940305dc91fe755befd27580f96b406ff6a68
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 19 Aug 2024 03:45:48 -0700
      Finished:     Mon, 19 Aug 2024 03:45:49 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hpzk8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-hpzk8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-h7cln to functional-522000
  Normal   Pulling    36s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     31s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.349s (4.349s including waiting). Image size: 84957542 bytes.
  Normal   Created    18s (x3 over 31s)  kubelet            Created container echoserver-arm
  Normal   Pulled     18s (x2 over 31s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Started    17s (x3 over 31s)  kubelet            Started container echoserver-arm
  Warning  BackOff    5s (x4 over 30s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-h7cln_default(d702251c-ba49-424f-ace5-1d1bfdc53a30)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-522000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
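"exec format error" means the kernel was handed /usr/sbin/nginx built for a different CPU architecture, so the container exits immediately, the pod loops in CrashLoopBackOff, and the service never gains an endpoint. Despite the arm tag, the payload of the image may still be amd64. Two checks, sketched with standard docker CLI run inside the node (profile name taken from the log; this assumes the image ships a minimal userland with uname):

    # What architecture does the image metadata claim?
    minikube -p functional-522000 ssh -- docker image inspect \
      --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8
    # Exec an arbitrary binary from the image; on a mismatched payload this
    # reproduces the same "exec format error" directly.
    minikube -p functional-522000 ssh -- docker run --rm \
      registry.k8s.io/echoserver-arm:1.8 uname -m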
functional_test.go:1614: (dbg) Run:  kubectl --context functional-522000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.220.93
IPs:                      10.103.220.93
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31528/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
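The decisive field above is the empty Endpoints: line. With zero Ready pods behind the app=hello-node-connect selector, kube-proxy has nothing to forward the NodePort to, which is exactly why every fetch of http://192.168.105.4:31528 was refused. Standard kubectl confirms the chain from service to missing endpoints to crashing pod:

    kubectl --context functional-522000 get endpoints hello-node-connect
    kubectl --context functional-522000 get pods -l app=hello-node-connect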
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-522000 -n functional-522000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-522000 ssh findmnt                                                                                        | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-522000                                                                                                 | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port973919382/001:/mount-9p       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh findmnt                                                                                        | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh -- ls                                                                                          | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh cat                                                                                            | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | /mount-9p/test-1724064351948842000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh stat                                                                                           | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh stat                                                                                           | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh sudo                                                                                           | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh findmnt                                                                                        | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-522000                                                                                                 | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3461367497/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh findmnt                                                                                        | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh -- ls                                                                                          | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh sudo                                                                                           | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-522000                                                                                                 | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424060481/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-522000                                                                                                 | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424060481/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-522000                                                                                                 | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424060481/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh findmnt                                                                                        | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh findmnt                                                                                        | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh findmnt                                                                                        | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-522000 ssh findmnt                                                                                        | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:45 PDT | 19 Aug 24 03:45 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-522000                                                                                                 | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:46 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-522000                                                                                                 | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:46 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-522000                                                                                                 | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:46 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-522000 --dry-run                                                                                       | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:46 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-522000 | jenkins | v1.33.1 | 19 Aug 24 03:46 PDT |                     |
	|           | -p functional-522000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 03:46:00
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 03:46:00.548370    2111 out.go:345] Setting OutFile to fd 1 ...
	I0819 03:46:00.548505    2111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:46:00.548508    2111 out.go:358] Setting ErrFile to fd 2...
	I0819 03:46:00.548511    2111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:46:00.548650    2111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 03:46:00.549716    2111 out.go:352] Setting JSON to false
	I0819 03:46:00.567047    2111 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":923,"bootTime":1724063437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 03:46:00.567134    2111 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 03:46:00.572227    2111 out.go:177] * [functional-522000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 03:46:00.579256    2111 notify.go:220] Checking for updates...
	I0819 03:46:00.583220    2111 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 03:46:00.586263    2111 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 03:46:00.589192    2111 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 03:46:00.593214    2111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 03:46:00.597171    2111 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 03:46:00.600190    2111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 03:46:00.603688    2111 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 03:46:00.603953    2111 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 03:46:00.608075    2111 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 03:46:00.615281    2111 start.go:297] selected driver: qemu2
	I0819 03:46:00.615290    2111 start.go:901] validating driver "qemu2" against &{Name:functional-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 03:46:00.615348    2111 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 03:46:00.617704    2111 cni.go:84] Creating CNI manager for ""
	I0819 03:46:00.617722    2111 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 03:46:00.617765    2111 start.go:340] cluster config:
	{Name:functional-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 03:46:00.629200    2111 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Aug 19 10:45:59 functional-522000 dockerd[5949]: time="2024-08-19T10:45:59.954708032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 10:45:59 functional-522000 dockerd[5949]: time="2024-08-19T10:45:59.954737407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 10:45:59 functional-522000 dockerd[5949]: time="2024-08-19T10:45:59.954742824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 10:45:59 functional-522000 dockerd[5949]: time="2024-08-19T10:45:59.954786699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 10:45:59 functional-522000 dockerd[5943]: time="2024-08-19T10:45:59.982682924Z" level=info msg="ignoring event" container=223b065468bfc5a50c991fd3cb0e3b6a2623ad55159756cb8f83f6b1f7a7ff92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 10:45:59 functional-522000 dockerd[5949]: time="2024-08-19T10:45:59.982719799Z" level=info msg="shim disconnected" id=223b065468bfc5a50c991fd3cb0e3b6a2623ad55159756cb8f83f6b1f7a7ff92 namespace=moby
	Aug 19 10:45:59 functional-522000 dockerd[5949]: time="2024-08-19T10:45:59.982858549Z" level=warning msg="cleaning up after shim disconnected" id=223b065468bfc5a50c991fd3cb0e3b6a2623ad55159756cb8f83f6b1f7a7ff92 namespace=moby
	Aug 19 10:45:59 functional-522000 dockerd[5949]: time="2024-08-19T10:45:59.982863340Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 10:46:01 functional-522000 dockerd[5949]: time="2024-08-19T10:46:01.552372602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 10:46:01 functional-522000 dockerd[5949]: time="2024-08-19T10:46:01.552406019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 10:46:01 functional-522000 dockerd[5949]: time="2024-08-19T10:46:01.552411394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 10:46:01 functional-522000 dockerd[5949]: time="2024-08-19T10:46:01.552444644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 10:46:01 functional-522000 dockerd[5949]: time="2024-08-19T10:46:01.561784983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 10:46:01 functional-522000 dockerd[5949]: time="2024-08-19T10:46:01.561893691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 10:46:01 functional-522000 dockerd[5949]: time="2024-08-19T10:46:01.561917900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 10:46:01 functional-522000 dockerd[5949]: time="2024-08-19T10:46:01.561974483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 10:46:01 functional-522000 cri-dockerd[6202]: time="2024-08-19T10:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bdd64d82d16897a4c9d92179c94ac5a71d95bffd6af43cd6048c4c60d3b8f8e9/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 19 10:46:01 functional-522000 cri-dockerd[6202]: time="2024-08-19T10:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8b0460299449a64d838f3e549fb4f331f82bb665d6d37e2549b9c1edfa4544b8/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 19 10:46:01 functional-522000 dockerd[5943]: time="2024-08-19T10:46:01.867860582Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Aug 19 10:46:03 functional-522000 cri-dockerd[6202]: time="2024-08-19T10:46:03Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Aug 19 10:46:03 functional-522000 dockerd[5949]: time="2024-08-19T10:46:03.389590679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 10:46:03 functional-522000 dockerd[5949]: time="2024-08-19T10:46:03.389636679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 10:46:03 functional-522000 dockerd[5949]: time="2024-08-19T10:46:03.389647346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 10:46:03 functional-522000 dockerd[5949]: time="2024-08-19T10:46:03.389689221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 10:46:03 functional-522000 dockerd[5943]: time="2024-08-19T10:46:03.560794618Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	43366a39d6e93       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   3 seconds ago        Running             dashboard-metrics-scraper   0                   bdd64d82d1689       dashboard-metrics-scraper-c5db448b4-8t2m2
	223b065468bfc       72565bf5bbedf                                                                                          7 seconds ago        Exited              echoserver-arm              2                   6a090e174be3c       hello-node-64b4f8f9ff-29qvb
	97a08844c7d7f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    12 seconds ago       Exited              mount-munger                0                   6b49e2a9cb2e5       busybox-mount
	c97b90bcc542a       72565bf5bbedf                                                                                          18 seconds ago       Exited              echoserver-arm              2                   ef1730e8b9501       hello-node-connect-65d86f57f4-h7cln
	96307c0008036       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add                          29 seconds ago       Running             myfrontend                  0                   b2260ab2cbbfa       sp-pod
	f5a3a60a4a113       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                          43 seconds ago       Running             nginx                       0                   77918ed62b6be       nginx-svc
	efae0c6bfed9c       2437cf7621777                                                                                          About a minute ago   Running             coredns                     2                   ba58871a29ff6       coredns-6f6b679f8f-p7zrp
	f85ef1718c33e       71d55d66fd4ee                                                                                          About a minute ago   Running             kube-proxy                  2                   d93aca0eafa9a       kube-proxy-zpqxj
	48c612e9d3feb       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         2                   790084f3abd47       storage-provisioner
	3f14df5846a9d       fcb0683e6bdbd                                                                                          About a minute ago   Running             kube-controller-manager     2                   299de9dd9ffe9       kube-controller-manager-functional-522000
	f20616c963ad9       fbbbd428abb4d                                                                                          About a minute ago   Running             kube-scheduler              2                   f0e9397e88dfc       kube-scheduler-functional-522000
	6121c6580ddb8       27e3830e14027                                                                                          About a minute ago   Running             etcd                        2                   2dd212ee5bf41       etcd-functional-522000
	47d43404566bf       cd0f0ae0ec9e0                                                                                          About a minute ago   Running             kube-apiserver              0                   9ff498303dc2a       kube-apiserver-functional-522000
	c86e6d78d553f       2437cf7621777                                                                                          About a minute ago   Exited              coredns                     1                   4b9211c2d2fdc       coredns-6f6b679f8f-p7zrp
	53d5605f71387       71d55d66fd4ee                                                                                          About a minute ago   Exited              kube-proxy                  1                   c86374c7a07c4       kube-proxy-zpqxj
	dbcc50348397b       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         1                   73dc231699a0d       storage-provisioner
	2ddfb1f250d22       27e3830e14027                                                                                          About a minute ago   Exited              etcd                        1                   e9f208d33a5fb       etcd-functional-522000
	ef5ea3f56183b       fbbbd428abb4d                                                                                          About a minute ago   Exited              kube-scheduler              1                   4fac6f067fe8e       kube-scheduler-functional-522000
	1f7c574cdbf8b       fcb0683e6bdbd                                                                                          About a minute ago   Exited              kube-controller-manager     1                   f2b998d593924       kube-controller-manager-functional-522000
	
	
	==> coredns [c86e6d78d553] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53598 - 48925 "HINFO IN 7143406990486518342.9055163655494975359. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011245992s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [efae0c6bfed9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60615 - 48877 "HINFO IN 1649220711199685159.6515074652453982271. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009115638s
	[INFO] 10.244.0.1:38546 - 36880 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00012575s
	[INFO] 10.244.0.1:26933 - 20530 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000100875s
	[INFO] 10.244.0.1:55442 - 63630 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000911044s
	[INFO] 10.244.0.1:61852 - 7777 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000101417s
	[INFO] 10.244.0.1:21559 - 64588 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000060001s
	[INFO] 10.244.0.1:20473 - 46614 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000123042s
	
	
	==> describe nodes <==
	Name:               functional-522000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-522000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=functional-522000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T03_43_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 10:43:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-522000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 10:45:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 10:45:57 +0000   Mon, 19 Aug 2024 10:43:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 10:45:57 +0000   Mon, 19 Aug 2024 10:43:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 10:45:57 +0000   Mon, 19 Aug 2024 10:43:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 10:45:57 +0000   Mon, 19 Aug 2024 10:43:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-522000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8f752442d2d4b2789aa8a9e0cda2328
	  System UUID:                f8f752442d2d4b2789aa8a9e0cda2328
	  Boot ID:                    2e764b7a-c7f9-4b03-b2ed-585f6829ff0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-29qvb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     hello-node-connect-65d86f57f4-h7cln          0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 coredns-6f6b679f8f-p7zrp                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m44s
	  kube-system                 etcd-functional-522000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m49s
	  kube-system                 kube-apiserver-functional-522000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-functional-522000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-proxy-zpqxj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-scheduler-functional-522000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-8t2m2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-9zwb4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m42s                kube-proxy       
	  Normal  Starting                 69s                  kube-proxy       
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m49s                kubelet          Node functional-522000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m49s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m49s                kubelet          Node functional-522000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m49s                kubelet          Node functional-522000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m49s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m45s                node-controller  Node functional-522000 event: Registered Node functional-522000 in Controller
	  Normal  NodeReady                2m45s                kubelet          Node functional-522000 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node functional-522000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node functional-522000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)  kubelet          Node functional-522000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           108s                 node-controller  Node functional-522000 event: Registered Node functional-522000 in Controller
	  Normal  Starting                 75s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  75s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  74s (x8 over 75s)    kubelet          Node functional-522000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x8 over 75s)    kubelet          Node functional-522000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x7 over 75s)    kubelet          Node functional-522000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           68s                  node-controller  Node functional-522000 event: Registered Node functional-522000 in Controller
	
	
	==> dmesg <==
	[  +3.411448] kauditd_printk_skb: 199 callbacks suppressed
	[  +8.986701] kauditd_printk_skb: 33 callbacks suppressed
	[  +1.699755] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[ +10.848950] systemd-fstab-generator[5470]: Ignoring "noauto" option for root device
	[  +0.054362] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.097899] systemd-fstab-generator[5505]: Ignoring "noauto" option for root device
	[  +0.117646] systemd-fstab-generator[5517]: Ignoring "noauto" option for root device
	[  +0.085381] systemd-fstab-generator[5531]: Ignoring "noauto" option for root device
	[  +5.147275] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.416308] systemd-fstab-generator[6150]: Ignoring "noauto" option for root device
	[  +0.068576] systemd-fstab-generator[6162]: Ignoring "noauto" option for root device
	[  +0.072563] systemd-fstab-generator[6174]: Ignoring "noauto" option for root device
	[  +0.093005] systemd-fstab-generator[6189]: Ignoring "noauto" option for root device
	[  +0.214077] systemd-fstab-generator[6364]: Ignoring "noauto" option for root device
	[  +0.935638] systemd-fstab-generator[6487]: Ignoring "noauto" option for root device
	[  +1.299975] kauditd_printk_skb: 194 callbacks suppressed
	[  +5.659480] kauditd_printk_skb: 36 callbacks suppressed
	[Aug19 10:45] systemd-fstab-generator[7469]: Ignoring "noauto" option for root device
	[  +6.495527] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.102267] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.322423] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.591629] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.410061] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.719806] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.478516] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [2ddfb1f250d2] <==
	{"level":"info","ts":"2024-08-19T10:44:15.795081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T10:44:15.795167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-19T10:44:15.795207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T10:44:15.795228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-19T10:44:15.795257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T10:44:15.795278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-19T10:44:15.799859Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-522000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T10:44:15.799935Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T10:44:15.800511Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T10:44:15.800550Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T10:44:15.800590Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T10:44:15.801985Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T10:44:15.801991Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T10:44:15.803773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T10:44:15.805399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-19T10:44:38.983957Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T10:44:38.983989Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-522000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-19T10:44:38.984029Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T10:44:38.984069Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T10:44:38.995273Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T10:44:38.995293Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T10:44:38.996468Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-19T10:44:38.997986Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-19T10:44:38.998015Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-19T10:44:38.998019Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-522000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [6121c6580ddb] <==
	{"level":"info","ts":"2024-08-19T10:44:53.951666Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-19T10:44:53.951725Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T10:44:53.951780Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T10:44:53.954662Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T10:44:53.959002Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T10:44:53.959212Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T10:44:53.959113Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-19T10:44:53.962938Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T10:44:53.962983Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-19T10:44:55.534852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-19T10:44:55.534997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-19T10:44:55.535045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-19T10:44:55.535082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-19T10:44:55.535099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-19T10:44:55.535123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-19T10:44:55.535143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-19T10:44:55.538320Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-522000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T10:44:55.538405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T10:44:55.538924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T10:44:55.539145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T10:44:55.539212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T10:44:55.540848Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T10:44:55.540848Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T10:44:55.543300Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-19T10:44:55.543510Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:46:07 up 3 min,  0 users,  load average: 0.87, 0.46, 0.19
	Linux functional-522000 5.10.207 #1 SMP PREEMPT Thu Aug 15 18:35:44 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [47d43404566b] <==
	I0819 10:44:56.131297       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 10:44:56.131300       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 10:44:56.131302       1 cache.go:39] Caches are synced for autoregister controller
	I0819 10:44:56.131735       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0819 10:44:56.132300       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 10:44:56.168313       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 10:44:56.168357       1 policy_source.go:224] refreshing policies
	I0819 10:44:56.169404       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 10:44:56.180666       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 10:44:57.032375       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 10:44:57.521651       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 10:44:57.525575       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 10:44:57.538423       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 10:44:57.553222       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 10:44:57.555268       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 10:44:59.573664       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 10:44:59.778870       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 10:45:14.139330       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.29.183"}
	I0819 10:45:20.114626       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.74.8"}
	I0819 10:45:30.494846       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0819 10:45:30.540224       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.220.93"}
	I0819 10:45:44.762850       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.52.134"}
	I0819 10:46:01.144791       1 controller.go:615] quota admission added evaluator for: namespaces
	I0819 10:46:01.238399       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.77.163"}
	I0819 10:46:01.271301       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.101.181"}
	
	
	==> kube-controller-manager [1f7c574cdbf8] <==
	I0819 10:44:19.857093       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0819 10:44:19.866752       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 10:44:19.870066       1 shared_informer.go:320] Caches are synced for ephemeral
	I0819 10:44:19.871171       1 shared_informer.go:320] Caches are synced for job
	I0819 10:44:19.872266       1 shared_informer.go:320] Caches are synced for disruption
	I0819 10:44:19.874431       1 shared_informer.go:320] Caches are synced for PVC protection
	I0819 10:44:19.875506       1 shared_informer.go:320] Caches are synced for endpoint
	I0819 10:44:19.889666       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0819 10:44:19.890371       1 shared_informer.go:320] Caches are synced for GC
	I0819 10:44:19.890377       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0819 10:44:19.891607       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 10:44:19.891654       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0819 10:44:19.891683       1 shared_informer.go:320] Caches are synced for taint
	I0819 10:44:19.891753       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 10:44:19.891858       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-522000"
	I0819 10:44:19.891905       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 10:44:19.941250       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0819 10:44:19.941681       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 10:44:20.113747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="172.307481ms"
	I0819 10:44:20.114846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="45.084µs"
	I0819 10:44:20.307798       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 10:44:20.341449       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 10:44:20.341484       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 10:44:25.969288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="10.324386ms"
	I0819 10:44:25.970324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="996.134µs"
	
	
	==> kube-controller-manager [3f14df5846a9] <==
	I0819 10:45:59.932445       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="27µs"
	I0819 10:46:00.913977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="24.75µs"
	I0819 10:46:01.175192       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.419755ms"
	E0819 10:46:01.175227       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 10:46:01.183751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.39963ms"
	E0819 10:46:01.183778       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 10:46:01.185331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.850631ms"
	E0819 10:46:01.185347       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 10:46:01.186681       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="1.660418ms"
	E0819 10:46:01.186694       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 10:46:01.188977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.376001ms"
	E0819 10:46:01.188991       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 10:46:01.193927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.325669ms"
	E0819 10:46:01.193946       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 10:46:01.200632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.276795ms"
	I0819 10:46:01.208565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.907047ms"
	I0819 10:46:01.208605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.917µs"
	I0819 10:46:01.211496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="14.042µs"
	I0819 10:46:01.223586       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.996671ms"
	I0819 10:46:01.237471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="13.857758ms"
	I0819 10:46:01.237786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="294.125µs"
	I0819 10:46:01.242943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="24.041µs"
	I0819 10:46:01.935811       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="41.333µs"
	I0819 10:46:03.992522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.999756ms"
	I0819 10:46:03.993089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="17.667µs"
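The "Unhandled Error" entries above share one root cause: the dashboard ReplicaSets were synced before the kubernetes-dashboard ServiceAccount existed, and the errors stop once it appears (clean "Finished syncing" entries from 10:46:01.200 onward). A quick confirmation against this cluster, reusing the context name from this run, would be:

	kubectl --context functional-522000 -n kubernetes-dashboard get sa kubernetes-dashboard
	kubectl --context functional-522000 -n kubernetes-dashboard get rs,pods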
	
	
	==> kube-proxy [53d5605f7138] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 10:44:17.214864       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 10:44:17.218749       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0819 10:44:17.218815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 10:44:17.226003       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 10:44:17.226014       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 10:44:17.226025       1 server_linux.go:169] "Using iptables Proxier"
	I0819 10:44:17.226724       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 10:44:17.226802       1 server.go:483] "Version info" version="v1.31.0"
	I0819 10:44:17.226806       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 10:44:17.227520       1 config.go:197] "Starting service config controller"
	I0819 10:44:17.227553       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 10:44:17.227578       1 config.go:104] "Starting endpoint slice config controller"
	I0819 10:44:17.227592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 10:44:17.227739       1 config.go:326] "Starting node config controller"
	I0819 10:44:17.227759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 10:44:17.328394       1 shared_informer.go:320] Caches are synced for node config
	I0819 10:44:17.328416       1 shared_informer.go:320] Caches are synced for service config
	I0819 10:44:17.328430       1 shared_informer.go:320] Caches are synced for endpoint slice config
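Both kube-proxy instances log the same two startup conditions. The truncated "Error cleaning up nftables rules" blocks are kube-proxy probing for stale nftables state on a guest kernel without nft table support; since the iptables proxier is selected anyway ("Using iptables Proxier"), they are cosmetic. Assuming the nft binary ships in the guest image (it may not), the missing kernel support could be confirmed with:

	out/minikube-darwin-arm64 -p functional-522000 ssh -- sudo nft list tables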
	
	
	==> kube-proxy [f85ef1718c33] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 10:44:57.457836       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 10:44:57.461377       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0819 10:44:57.461405       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 10:44:57.472098       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 10:44:57.472119       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 10:44:57.472133       1 server_linux.go:169] "Using iptables Proxier"
	I0819 10:44:57.472682       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 10:44:57.472771       1 server.go:483] "Version info" version="v1.31.0"
	I0819 10:44:57.472785       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 10:44:57.473249       1 config.go:197] "Starting service config controller"
	I0819 10:44:57.473261       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 10:44:57.473270       1 config.go:104] "Starting endpoint slice config controller"
	I0819 10:44:57.473273       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 10:44:57.473466       1 config.go:326] "Starting node config controller"
	I0819 10:44:57.473472       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 10:44:57.573372       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 10:44:57.573372       1 shared_informer.go:320] Caches are synced for service config
	I0819 10:44:57.573498       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ef5ea3f56183] <==
	I0819 10:44:14.986652       1 serving.go:386] Generated self-signed cert in-memory
	W0819 10:44:16.351852       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 10:44:16.351942       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 10:44:16.351982       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 10:44:16.352001       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 10:44:16.370355       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 10:44:16.370469       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 10:44:16.373939       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 10:44:16.375060       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 10:44:16.375395       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 10:44:16.376979       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 10:44:16.475511       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 10:44:38.956755       1 run.go:72] "command failed" err="finished without leader elect"
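The requestheader_controller warning carries its own fix template; since kube-scheduler authenticates as the user system:kube-scheduler rather than a ServiceAccount, the concrete form would be the sketch below (binding name arbitrary). The warning is benign here, as the scheduler continues without the authentication configuration, and the closing "finished without leader elect" error is consistent with this first instance exiting during the control-plane restart that precedes the second instance below.

	kubectl -n kube-system create rolebinding extension-apiserver-authn-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler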
	
	
	==> kube-scheduler [f20616c963ad] <==
	I0819 10:44:54.239170       1 serving.go:386] Generated self-signed cert in-memory
	W0819 10:44:56.092628       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 10:44:56.092969       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 10:44:56.092998       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 10:44:56.093017       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 10:44:56.103970       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 10:44:56.104228       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 10:44:56.105280       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 10:44:56.105312       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 10:44:56.105372       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 10:44:56.105400       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 10:44:56.206886       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 10:45:52 functional-522000 kubelet[6494]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 10:45:52 functional-522000 kubelet[6494]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 10:45:52 functional-522000 kubelet[6494]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 10:45:52 functional-522000 kubelet[6494]: I0819 10:45:52.991798    6494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z488g\" (UniqueName: \"kubernetes.io/projected/3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6-kube-api-access-z488g\") pod \"busybox-mount\" (UID: \"3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6\") " pod="default/busybox-mount"
	Aug 19 10:45:52 functional-522000 kubelet[6494]: I0819 10:45:52.991857    6494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6-test-volume\") pod \"busybox-mount\" (UID: \"3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6\") " pod="default/busybox-mount"
	Aug 19 10:45:53 functional-522000 kubelet[6494]: I0819 10:45:53.011139    6494 scope.go:117] "RemoveContainer" containerID="0ec54652029638f802ca5ddbef2be4665c780cd29779f215751e7f4dc54d4984"
	Aug 19 10:45:57 functional-522000 kubelet[6494]: I0819 10:45:57.138720    6494 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z488g\" (UniqueName: \"kubernetes.io/projected/3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6-kube-api-access-z488g\") pod \"3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6\" (UID: \"3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6\") "
	Aug 19 10:45:57 functional-522000 kubelet[6494]: I0819 10:45:57.138744    6494 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6-test-volume\") pod \"3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6\" (UID: \"3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6\") "
	Aug 19 10:45:57 functional-522000 kubelet[6494]: I0819 10:45:57.138787    6494 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6-test-volume" (OuterVolumeSpecName: "test-volume") pod "3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6" (UID: "3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 19 10:45:57 functional-522000 kubelet[6494]: I0819 10:45:57.141826    6494 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6-kube-api-access-z488g" (OuterVolumeSpecName: "kube-api-access-z488g") pod "3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6" (UID: "3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6"). InnerVolumeSpecName "kube-api-access-z488g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 10:45:57 functional-522000 kubelet[6494]: I0819 10:45:57.239457    6494 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z488g\" (UniqueName: \"kubernetes.io/projected/3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6-kube-api-access-z488g\") on node \"functional-522000\" DevicePath \"\""
	Aug 19 10:45:57 functional-522000 kubelet[6494]: I0819 10:45:57.239477    6494 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6-test-volume\") on node \"functional-522000\" DevicePath \"\""
	Aug 19 10:45:57 functional-522000 kubelet[6494]: I0819 10:45:57.885790    6494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b49e2a9cb2e54f6821676ce1bf19cb3572a2510da4081b020f19a90ff7e4ff1"
	Aug 19 10:45:59 functional-522000 kubelet[6494]: I0819 10:45:59.924942    6494 scope.go:117] "RemoveContainer" containerID="774418f2be3a86f00617f53617c34e8b1b076fdf86240a6532aba7638d00b38e"
	Aug 19 10:46:00 functional-522000 kubelet[6494]: I0819 10:46:00.906070    6494 scope.go:117] "RemoveContainer" containerID="774418f2be3a86f00617f53617c34e8b1b076fdf86240a6532aba7638d00b38e"
	Aug 19 10:46:00 functional-522000 kubelet[6494]: I0819 10:46:00.906246    6494 scope.go:117] "RemoveContainer" containerID="223b065468bfc5a50c991fd3cb0e3b6a2623ad55159756cb8f83f6b1f7a7ff92"
	Aug 19 10:46:00 functional-522000 kubelet[6494]: E0819 10:46:00.906315    6494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-29qvb_default(cb6fcc8f-7cce-4417-94d4-5eeac95617c6)\"" pod="default/hello-node-64b4f8f9ff-29qvb" podUID="cb6fcc8f-7cce-4417-94d4-5eeac95617c6"
	Aug 19 10:46:01 functional-522000 kubelet[6494]: E0819 10:46:01.201984    6494 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6" containerName="mount-munger"
	Aug 19 10:46:01 functional-522000 kubelet[6494]: I0819 10:46:01.202047    6494 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6" containerName="mount-munger"
	Aug 19 10:46:01 functional-522000 kubelet[6494]: I0819 10:46:01.377775    6494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e372ada9-6316-40d9-a736-6e6464c46610-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-8t2m2\" (UID: \"e372ada9-6316-40d9-a736-6e6464c46610\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-8t2m2"
	Aug 19 10:46:01 functional-522000 kubelet[6494]: I0819 10:46:01.377805    6494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/292099d6-b181-4cb6-a36f-2825d0bd17f3-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-9zwb4\" (UID: \"292099d6-b181-4cb6-a36f-2825d0bd17f3\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-9zwb4"
	Aug 19 10:46:01 functional-522000 kubelet[6494]: I0819 10:46:01.377816    6494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csf5j\" (UniqueName: \"kubernetes.io/projected/292099d6-b181-4cb6-a36f-2825d0bd17f3-kube-api-access-csf5j\") pod \"kubernetes-dashboard-695b96c756-9zwb4\" (UID: \"292099d6-b181-4cb6-a36f-2825d0bd17f3\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-9zwb4"
	Aug 19 10:46:01 functional-522000 kubelet[6494]: I0819 10:46:01.377841    6494 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75f9b\" (UniqueName: \"kubernetes.io/projected/e372ada9-6316-40d9-a736-6e6464c46610-kube-api-access-75f9b\") pod \"dashboard-metrics-scraper-c5db448b4-8t2m2\" (UID: \"e372ada9-6316-40d9-a736-6e6464c46610\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-8t2m2"
	Aug 19 10:46:01 functional-522000 kubelet[6494]: I0819 10:46:01.924135    6494 scope.go:117] "RemoveContainer" containerID="c97b90bcc542a2bd94f9d19969e940305dc91fe755befd27580f96b406ff6a68"
	Aug 19 10:46:01 functional-522000 kubelet[6494]: E0819 10:46:01.924274    6494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-h7cln_default(d702251c-ba49-424f-ace5-1d1bfdc53a30)\"" pod="default/hello-node-connect-65d86f57f4-h7cln" podUID="d702251c-ba49-424f-ace5-1d1bfdc53a30"
	
	
	==> storage-provisioner [48c612e9d3fe] <==
	I0819 10:44:57.399027       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 10:44:57.418724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 10:44:57.418779       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 10:45:14.828630       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 10:45:14.828842       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-522000_84e16349-73e3-4e63-9958-c6a9497bb88f!
	I0819 10:45:14.828949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2ee0db86-de0d-40f0-ad59-daa058ca1bb2", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-522000_84e16349-73e3-4e63-9958-c6a9497bb88f became leader
	I0819 10:45:14.929884       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-522000_84e16349-73e3-4e63-9958-c6a9497bb88f!
	I0819 10:45:24.906376       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0819 10:45:24.906622       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d6e6bb8c-818f-49e1-a0a7-12c32e5a652d", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0819 10:45:24.906404       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    0720fe71-df09-470f-a259-c85d30deb0d2 340 0 2024-08-19 10:43:24 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-19 10:43:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-d6e6bb8c-818f-49e1-a0a7-12c32e5a652d &PersistentVolumeClaim{ObjectMeta:{myclaim  default  d6e6bb8c-818f-49e1-a0a7-12c32e5a652d 684 0 2024-08-19 10:45:24 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-19 10:45:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-19 10:45:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0819 10:45:24.907455       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-d6e6bb8c-818f-49e1-a0a7-12c32e5a652d" provisioned
	I0819 10:45:24.907472       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0819 10:45:24.907479       1 volume_store.go:212] Trying to save persistentvolume "pvc-d6e6bb8c-818f-49e1-a0a7-12c32e5a652d"
	I0819 10:45:24.911027       1 volume_store.go:219] persistentvolume "pvc-d6e6bb8c-818f-49e1-a0a7-12c32e5a652d" saved
	I0819 10:45:24.911180       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d6e6bb8c-818f-49e1-a0a7-12c32e5a652d", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-d6e6bb8c-818f-49e1-a0a7-12c32e5a652d
	
	
	==> storage-provisioner [dbcc50348397] <==
	I0819 10:44:17.131232       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 10:44:17.137246       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 10:44:17.137278       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 10:44:34.536066       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 10:44:34.536490       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2ee0db86-de0d-40f0-ad59-daa058ca1bb2", APIVersion:"v1", ResourceVersion:"525", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-522000_f5a22ffc-1b97-4cea-bd35-e6dd2ce9af99 became leader
	I0819 10:44:34.536663       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-522000_f5a22ffc-1b97-4cea-bd35-e6dd2ce9af99!
	I0819 10:44:34.637545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-522000_f5a22ffc-1b97-4cea-bd35-e6dd2ce9af99!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-522000 -n functional-522000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-522000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-695b96c756-9zwb4
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-522000 describe pod busybox-mount kubernetes-dashboard-695b96c756-9zwb4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-522000 describe pod busybox-mount kubernetes-dashboard-695b96c756-9zwb4: exit status 1 (40.712833ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-522000/192.168.105.4
	Start Time:       Mon, 19 Aug 2024 03:45:52 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://97a08844c7d7f3b27347fb556070495c0f8c7ff99f4f163fc99c136ca1399849
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 19 Aug 2024 03:45:54 -0700
	      Finished:     Mon, 19 Aug 2024 03:45:54 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z488g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-z488g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  14s   default-scheduler  Successfully assigned default/busybox-mount to functional-522000
	  Normal  Pulling    14s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     13s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.232s (1.232s including waiting). Image size: 3547125 bytes.
	  Normal  Created    13s   kubelet            Created container mount-munger
	  Normal  Started    13s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-9zwb4" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-522000 describe pod busybox-mount kubernetes-dashboard-695b96c756-9zwb4: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (37.18s)
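The connect failure traces to echoserver-arm sitting in CrashLoopBackOff (kubelet entries at 10:46:00-10:46:01 above), so the service never gains a ready endpoint. A plausible triage, with the pod name taken from this run's logs:

	kubectl --context functional-522000 logs hello-node-connect-65d86f57f4-h7cln --previous
	kubectl --context functional-522000 describe pod hello-node-connect-65d86f57f4-h7cln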

TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 node stop m02 -v=7 --alsologtostderr
E0819 03:50:18.700626    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:18.708262    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:18.721596    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:18.743265    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:18.786637    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:18.868757    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:19.032178    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:19.355581    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:19.997360    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:21.280806    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:23.844216    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:28.967174    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-927000 node stop m02 -v=7 --alsologtostderr: (12.19123575s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr
E0819 03:50:39.209456    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:50:59.692824    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:51:40.655685    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:53:02.578044    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr: exit status 7 (2m55.967462375s)

-- stdout --
	ha-927000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-927000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-927000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0819 03:50:29.331811    2562 out.go:345] Setting OutFile to fd 1 ...
	I0819 03:50:29.331958    2562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:50:29.331964    2562 out.go:358] Setting ErrFile to fd 2...
	I0819 03:50:29.331967    2562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:50:29.332111    2562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 03:50:29.332233    2562 out.go:352] Setting JSON to false
	I0819 03:50:29.332247    2562 mustload.go:65] Loading cluster: ha-927000
	I0819 03:50:29.332288    2562 notify.go:220] Checking for updates...
	I0819 03:50:29.332483    2562 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 03:50:29.332490    2562 status.go:255] checking status of ha-927000 ...
	I0819 03:50:29.333207    2562 status.go:330] ha-927000 host status = "Running" (err=<nil>)
	I0819 03:50:29.333218    2562 host.go:66] Checking if "ha-927000" exists ...
	I0819 03:50:29.333332    2562 host.go:66] Checking if "ha-927000" exists ...
	I0819 03:50:29.333439    2562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 03:50:29.333447    2562 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/id_rsa Username:docker}
	W0819 03:50:55.257493    2562 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0819 03:50:55.257623    2562 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0819 03:50:55.257642    2562 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0819 03:50:55.257652    2562 status.go:257] ha-927000 status: &{Name:ha-927000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 03:50:55.257674    2562 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0819 03:50:55.257695    2562 status.go:255] checking status of ha-927000-m02 ...
	I0819 03:50:55.258295    2562 status.go:330] ha-927000-m02 host status = "Stopped" (err=<nil>)
	I0819 03:50:55.258305    2562 status.go:343] host is not running, skipping remaining checks
	I0819 03:50:55.258311    2562 status.go:257] ha-927000-m02 status: &{Name:ha-927000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 03:50:55.258322    2562 status.go:255] checking status of ha-927000-m03 ...
	I0819 03:50:55.259539    2562 status.go:330] ha-927000-m03 host status = "Running" (err=<nil>)
	I0819 03:50:55.259550    2562 host.go:66] Checking if "ha-927000-m03" exists ...
	I0819 03:50:55.259709    2562 host.go:66] Checking if "ha-927000-m03" exists ...
	I0819 03:50:55.259837    2562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 03:50:55.259847    2562 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m03/id_rsa Username:docker}
	W0819 03:52:10.259574    2562 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0819 03:52:10.259697    2562 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0819 03:52:10.259720    2562 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0819 03:52:10.259729    2562 status.go:257] ha-927000-m03 status: &{Name:ha-927000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 03:52:10.259752    2562 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0819 03:52:10.259762    2562 status.go:255] checking status of ha-927000-m04 ...
	I0819 03:52:10.261226    2562 status.go:330] ha-927000-m04 host status = "Running" (err=<nil>)
	I0819 03:52:10.261241    2562 host.go:66] Checking if "ha-927000-m04" exists ...
	I0819 03:52:10.261484    2562 host.go:66] Checking if "ha-927000-m04" exists ...
	I0819 03:52:10.261801    2562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 03:52:10.261823    2562 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m04/id_rsa Username:docker}
	W0819 03:53:25.261577    2562 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0819 03:53:25.261626    2562 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0819 03:53:25.261635    2562 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0819 03:53:25.261639    2562 status.go:257] ha-927000-m04 status: &{Name:ha-927000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0819 03:53:25.261648    2562 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr": ha-927000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-927000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-927000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr": ha-927000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-927000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-927000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr": ha-927000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-927000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-927000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
E0819 03:53:32.803488    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 3 (25.964443375s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 03:53:51.226020    2615 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0819 03:53:51.226033    2615 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
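The stop of m02 itself completed in ~12s; what fails is the follow-up status check, which burns 26-75s per remaining node on SSH dials that time out (dial tcp 192.168.105.x:22). That pattern points at host-side guest networking rather than the stop logic. A quick reachability probe from the macOS host, using BSD nc's -G connect-timeout flag:

	nc -z -G 5 192.168.105.5 22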

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.600132083s)
ha_test.go:413: expected profile "ha-927000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-927000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-927000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-927000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
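The assertion only cares about the top-level Status field buried in that escaped JSON ("Stopped" where "Degraded" was expected, presumably because every node, not just m02, was unreachable). When reproducing locally, the field is easier to read out directly, assuming jq is installed:

	out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name)\t\(.Status)"'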
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
E0819 03:55:18.696441    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 3 (25.964795042s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 03:55:34.786455    2650 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0819 03:55:34.786499    2650 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (208.69s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-927000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.107803042s)

-- stdout --
	* Starting "ha-927000-m02" control-plane node in "ha-927000" cluster
	* Restarting existing qemu2 VM for "ha-927000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-927000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
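The restart never reaches the guest: qemu is launched through socket_vmnet_client, and the dial to /var/run/socket_vmnet is refused, which means the socket_vmnet daemon on the macOS host is down or listening elsewhere. A hedged recovery sequence on the host, per the Homebrew-managed setup the qemu driver expects:

	ls -l /var/run/socket_vmnet
	sudo brew services restart socket_vmnet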
** stderr ** 
	I0819 03:55:34.851099    2654 out.go:345] Setting OutFile to fd 1 ...
	I0819 03:55:34.851421    2654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:55:34.851426    2654 out.go:358] Setting ErrFile to fd 2...
	I0819 03:55:34.851429    2654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:55:34.851609    2654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 03:55:34.851920    2654 mustload.go:65] Loading cluster: ha-927000
	I0819 03:55:34.852222    2654 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0819 03:55:34.852504    2654 host.go:58] "ha-927000-m02" host status: Stopped
	I0819 03:55:34.856933    2654 out.go:177] * Starting "ha-927000-m02" control-plane node in "ha-927000" cluster
	I0819 03:55:34.859920    2654 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 03:55:34.859933    2654 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 03:55:34.859937    2654 cache.go:56] Caching tarball of preloaded images
	I0819 03:55:34.860009    2654 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 03:55:34.860014    2654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 03:55:34.860083    2654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/ha-927000/config.json ...
	I0819 03:55:34.860405    2654 start.go:360] acquireMachinesLock for ha-927000-m02: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 03:55:34.860448    2654 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "ha-927000-m02"
	I0819 03:55:34.860457    2654 start.go:96] Skipping create...Using existing machine configuration
	I0819 03:55:34.860462    2654 fix.go:54] fixHost starting: m02
	I0819 03:55:34.860595    2654 fix.go:112] recreateIfNeeded on ha-927000-m02: state=Stopped err=<nil>
	W0819 03:55:34.860604    2654 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 03:55:34.864921    2654 out.go:177] * Restarting existing qemu2 VM for "ha-927000-m02" ...
	I0819 03:55:34.868935    2654 qemu.go:418] Using hvf for hardware acceleration
	I0819 03:55:34.868996    2654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:29:38:4d:01:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/disk.qcow2
	I0819 03:55:34.871454    2654 main.go:141] libmachine: STDOUT: 
	I0819 03:55:34.871473    2654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 03:55:34.871503    2654 fix.go:56] duration metric: took 11.0405ms for fixHost
	I0819 03:55:34.871506    2654 start.go:83] releasing machines lock for "ha-927000-m02", held for 11.053875ms
	W0819 03:55:34.871514    2654 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 03:55:34.871545    2654 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 03:55:34.871550    2654 start.go:729] Will try again in 5 seconds ...
	I0819 03:55:39.872917    2654 start.go:360] acquireMachinesLock for ha-927000-m02: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 03:55:39.873036    2654 start.go:364] duration metric: took 95.5µs to acquireMachinesLock for "ha-927000-m02"
	I0819 03:55:39.873071    2654 start.go:96] Skipping create...Using existing machine configuration
	I0819 03:55:39.873075    2654 fix.go:54] fixHost starting: m02
	I0819 03:55:39.873245    2654 fix.go:112] recreateIfNeeded on ha-927000-m02: state=Stopped err=<nil>
	W0819 03:55:39.873250    2654 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 03:55:39.876399    2654 out.go:177] * Restarting existing qemu2 VM for "ha-927000-m02" ...
	I0819 03:55:39.880295    2654 qemu.go:418] Using hvf for hardware acceleration
	I0819 03:55:39.880371    2654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:29:38:4d:01:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/disk.qcow2
	I0819 03:55:39.882456    2654 main.go:141] libmachine: STDOUT: 
	I0819 03:55:39.882473    2654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 03:55:39.882492    2654 fix.go:56] duration metric: took 9.416792ms for fixHost
	I0819 03:55:39.882495    2654 start.go:83] releasing machines lock for "ha-927000-m02", held for 9.451041ms
	W0819 03:55:39.882535    2654 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-927000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-927000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 03:55:39.885283    2654 out.go:201] 
	W0819 03:55:39.888284    2654 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 03:55:39.888290    2654 out.go:270] * 
	* 
	W0819 03:55:39.890047    2654 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 03:55:39.894267    2654 out.go:201] 

** /stderr **
ha_test.go:422: I0819 03:55:34.851099    2654 out.go:345] Setting OutFile to fd 1 ...
I0819 03:55:34.851421    2654 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:55:34.851426    2654 out.go:358] Setting ErrFile to fd 2...
I0819 03:55:34.851429    2654 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:55:34.851609    2654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
I0819 03:55:34.851920    2654 mustload.go:65] Loading cluster: ha-927000
I0819 03:55:34.852222    2654 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0819 03:55:34.852504    2654 host.go:58] "ha-927000-m02" host status: Stopped
I0819 03:55:34.856933    2654 out.go:177] * Starting "ha-927000-m02" control-plane node in "ha-927000" cluster
I0819 03:55:34.859920    2654 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0819 03:55:34.859933    2654 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0819 03:55:34.859937    2654 cache.go:56] Caching tarball of preloaded images
I0819 03:55:34.860009    2654 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0819 03:55:34.860014    2654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0819 03:55:34.860083    2654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/ha-927000/config.json ...
I0819 03:55:34.860405    2654 start.go:360] acquireMachinesLock for ha-927000-m02: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 03:55:34.860448    2654 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "ha-927000-m02"
I0819 03:55:34.860457    2654 start.go:96] Skipping create...Using existing machine configuration
I0819 03:55:34.860462    2654 fix.go:54] fixHost starting: m02
I0819 03:55:34.860595    2654 fix.go:112] recreateIfNeeded on ha-927000-m02: state=Stopped err=<nil>
W0819 03:55:34.860604    2654 fix.go:138] unexpected machine state, will restart: <nil>
I0819 03:55:34.864921    2654 out.go:177] * Restarting existing qemu2 VM for "ha-927000-m02" ...
I0819 03:55:34.868935    2654 qemu.go:418] Using hvf for hardware acceleration
I0819 03:55:34.868996    2654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:29:38:4d:01:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/disk.qcow2
I0819 03:55:34.871454    2654 main.go:141] libmachine: STDOUT: 
I0819 03:55:34.871473    2654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 03:55:34.871503    2654 fix.go:56] duration metric: took 11.0405ms for fixHost
I0819 03:55:34.871506    2654 start.go:83] releasing machines lock for "ha-927000-m02", held for 11.053875ms
W0819 03:55:34.871514    2654 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 03:55:34.871545    2654 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 03:55:34.871550    2654 start.go:729] Will try again in 5 seconds ...
I0819 03:55:39.872917    2654 start.go:360] acquireMachinesLock for ha-927000-m02: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 03:55:39.873036    2654 start.go:364] duration metric: took 95.5µs to acquireMachinesLock for "ha-927000-m02"
I0819 03:55:39.873071    2654 start.go:96] Skipping create...Using existing machine configuration
I0819 03:55:39.873075    2654 fix.go:54] fixHost starting: m02
I0819 03:55:39.873245    2654 fix.go:112] recreateIfNeeded on ha-927000-m02: state=Stopped err=<nil>
W0819 03:55:39.873250    2654 fix.go:138] unexpected machine state, will restart: <nil>
I0819 03:55:39.876399    2654 out.go:177] * Restarting existing qemu2 VM for "ha-927000-m02" ...
I0819 03:55:39.880295    2654 qemu.go:418] Using hvf for hardware acceleration
I0819 03:55:39.880371    2654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:29:38:4d:01:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m02/disk.qcow2
I0819 03:55:39.882456    2654 main.go:141] libmachine: STDOUT: 
I0819 03:55:39.882473    2654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 03:55:39.882492    2654 fix.go:56] duration metric: took 9.416792ms for fixHost
I0819 03:55:39.882495    2654 start.go:83] releasing machines lock for "ha-927000-m02", held for 9.451041ms
W0819 03:55:39.882535    2654 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-927000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-927000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 03:55:39.885283    2654 out.go:201] 
W0819 03:55:39.888284    2654 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 03:55:39.888290    2654 out.go:270] * 
* 
W0819 03:55:39.890047    2654 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 03:55:39.894267    2654 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-927000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr
E0819 03:55:46.419305    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:58:32.799743    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr: exit status 7 (2m57.598713583s)

-- stdout --
	ha-927000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-927000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-927000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0819 03:55:39.931511    2658 out.go:345] Setting OutFile to fd 1 ...
	I0819 03:55:39.931687    2658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:55:39.931691    2658 out.go:358] Setting ErrFile to fd 2...
	I0819 03:55:39.931693    2658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:55:39.931851    2658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 03:55:39.931989    2658 out.go:352] Setting JSON to false
	I0819 03:55:39.932002    2658 mustload.go:65] Loading cluster: ha-927000
	I0819 03:55:39.932075    2658 notify.go:220] Checking for updates...
	I0819 03:55:39.932238    2658 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 03:55:39.932245    2658 status.go:255] checking status of ha-927000 ...
	I0819 03:55:39.932938    2658 status.go:330] ha-927000 host status = "Running" (err=<nil>)
	I0819 03:55:39.932948    2658 host.go:66] Checking if "ha-927000" exists ...
	I0819 03:55:39.933041    2658 host.go:66] Checking if "ha-927000" exists ...
	I0819 03:55:39.933162    2658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 03:55:39.933174    2658 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/id_rsa Username:docker}
	W0819 03:55:39.933368    2658 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0819 03:55:39.933388    2658 retry.go:31] will retry after 182.946462ms: dial tcp 192.168.105.5:22: connect: host is down
	W0819 03:55:40.118461    2658 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0819 03:55:40.118481    2658 retry.go:31] will retry after 523.700313ms: dial tcp 192.168.105.5:22: connect: host is down
	W0819 03:55:40.643111    2658 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0819 03:55:40.643138    2658 retry.go:31] will retry after 450.294663ms: dial tcp 192.168.105.5:22: connect: host is down
	W0819 03:55:41.095605    2658 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0819 03:55:41.095674    2658 retry.go:31] will retry after 255.229868ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0819 03:55:41.351530    2658 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/id_rsa Username:docker}
	W0819 03:55:41.351827    2658 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0819 03:55:41.351840    2658 retry.go:31] will retry after 208.252775ms: dial tcp 192.168.105.5:22: connect: host is down
	W0819 03:56:07.485236    2658 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0819 03:56:07.485373    2658 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0819 03:56:07.485396    2658 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0819 03:56:07.485400    2658 status.go:257] ha-927000 status: &{Name:ha-927000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 03:56:07.485414    2658 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0819 03:56:07.485418    2658 status.go:255] checking status of ha-927000-m02 ...
	I0819 03:56:07.485640    2658 status.go:330] ha-927000-m02 host status = "Stopped" (err=<nil>)
	I0819 03:56:07.485646    2658 status.go:343] host is not running, skipping remaining checks
	I0819 03:56:07.485648    2658 status.go:257] ha-927000-m02 status: &{Name:ha-927000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 03:56:07.485652    2658 status.go:255] checking status of ha-927000-m03 ...
	I0819 03:56:07.486287    2658 status.go:330] ha-927000-m03 host status = "Running" (err=<nil>)
	I0819 03:56:07.486296    2658 host.go:66] Checking if "ha-927000-m03" exists ...
	I0819 03:56:07.486389    2658 host.go:66] Checking if "ha-927000-m03" exists ...
	I0819 03:56:07.486506    2658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 03:56:07.486516    2658 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m03/id_rsa Username:docker}
	W0819 03:57:22.488548    2658 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0819 03:57:22.488599    2658 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0819 03:57:22.488608    2658 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0819 03:57:22.488626    2658 status.go:257] ha-927000-m03 status: &{Name:ha-927000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 03:57:22.488637    2658 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0819 03:57:22.488641    2658 status.go:255] checking status of ha-927000-m04 ...
	I0819 03:57:22.489420    2658 status.go:330] ha-927000-m04 host status = "Running" (err=<nil>)
	I0819 03:57:22.489428    2658 host.go:66] Checking if "ha-927000-m04" exists ...
	I0819 03:57:22.489525    2658 host.go:66] Checking if "ha-927000-m04" exists ...
	I0819 03:57:22.489651    2658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 03:57:22.489659    2658 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000-m04/id_rsa Username:docker}
	W0819 03:58:37.491066    2658 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0819 03:58:37.491229    2658 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0819 03:58:37.491264    2658 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0819 03:58:37.491278    2658 status.go:257] ha-927000-m04 status: &{Name:ha-927000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0819 03:58:37.491316    2658 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 3 (25.986003375s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 03:59:03.478345    2700 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0819 03:59:03.478372    2700 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.69s)
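Every failure in this run reduces to the same root cause: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so qemu is never handed its networking file descriptor and each driver start aborts before the VM boots. The check can be reproduced outside minikube by dialing the unix socket directly; the sketch below is illustrative Go, not part of the test suite, and assumes only the socket path shown in the logs above.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client connects to above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused, the socket_vmnet daemon on the CI host is down; for a Homebrew install, restarting it (for example `sudo brew services restart socket_vmnet`) should clear this class of failure before a rerun.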

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.4s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-927000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-927000 -v=7 --alsologtostderr
E0819 04:03:32.795186    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-927000 -v=7 --alsologtostderr: (3m49.000075917s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-927000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-927000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.230190875s)

-- stdout --
	* [ha-927000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-927000" primary control-plane node in "ha-927000" cluster
	* Restarting existing qemu2 VM for "ha-927000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-927000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:04:10.490816    3097 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:04:10.490987    3097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:04:10.490992    3097 out.go:358] Setting ErrFile to fd 2...
	I0819 04:04:10.490995    3097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:04:10.491149    3097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:04:10.492461    3097 out.go:352] Setting JSON to false
	I0819 04:04:10.512224    3097 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2013,"bootTime":1724063437,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:04:10.512291    3097 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:04:10.517874    3097 out.go:177] * [ha-927000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:04:10.525862    3097 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:04:10.525917    3097 notify.go:220] Checking for updates...
	I0819 04:04:10.533784    3097 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:04:10.537838    3097 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:04:10.540858    3097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:04:10.543821    3097 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:04:10.546851    3097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:04:10.550235    3097 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:04:10.550288    3097 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:04:10.554796    3097 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:04:10.561885    3097 start.go:297] selected driver: qemu2
	I0819 04:04:10.561892    3097 start.go:901] validating driver "qemu2" against &{Name:ha-927000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-927000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:04:10.561977    3097 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:04:10.564384    3097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:04:10.564430    3097 cni.go:84] Creating CNI manager for ""
	I0819 04:04:10.564435    3097 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 04:04:10.564488    3097 start.go:340] cluster config:
	{Name:ha-927000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-927000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:04:10.568524    3097 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:04:10.576868    3097 out.go:177] * Starting "ha-927000" primary control-plane node in "ha-927000" cluster
	I0819 04:04:10.581788    3097 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:04:10.581804    3097 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:04:10.581813    3097 cache.go:56] Caching tarball of preloaded images
	I0819 04:04:10.581867    3097 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:04:10.581872    3097 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:04:10.581947    3097 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/ha-927000/config.json ...
	I0819 04:04:10.582283    3097 start.go:360] acquireMachinesLock for ha-927000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:04:10.582321    3097 start.go:364] duration metric: took 30.833µs to acquireMachinesLock for "ha-927000"
	I0819 04:04:10.582331    3097 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:04:10.582336    3097 fix.go:54] fixHost starting: 
	I0819 04:04:10.582465    3097 fix.go:112] recreateIfNeeded on ha-927000: state=Stopped err=<nil>
	W0819 04:04:10.582472    3097 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:04:10.586849    3097 out.go:177] * Restarting existing qemu2 VM for "ha-927000" ...
	I0819 04:04:10.594825    3097 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:04:10.594860    3097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:c2:71:37:d8:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/disk.qcow2
	I0819 04:04:10.596964    3097 main.go:141] libmachine: STDOUT: 
	I0819 04:04:10.596988    3097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:04:10.597020    3097 fix.go:56] duration metric: took 14.685ms for fixHost
	I0819 04:04:10.597024    3097 start.go:83] releasing machines lock for "ha-927000", held for 14.699208ms
	W0819 04:04:10.597032    3097 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:04:10.597075    3097 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:04:10.597080    3097 start.go:729] Will try again in 5 seconds ...
	I0819 04:04:15.599216    3097 start.go:360] acquireMachinesLock for ha-927000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:04:15.599744    3097 start.go:364] duration metric: took 412.25µs to acquireMachinesLock for "ha-927000"
	I0819 04:04:15.599891    3097 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:04:15.599912    3097 fix.go:54] fixHost starting: 
	I0819 04:04:15.600621    3097 fix.go:112] recreateIfNeeded on ha-927000: state=Stopped err=<nil>
	W0819 04:04:15.600648    3097 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:04:15.605115    3097 out.go:177] * Restarting existing qemu2 VM for "ha-927000" ...
	I0819 04:04:15.612987    3097 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:04:15.613225    3097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:c2:71:37:d8:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/disk.qcow2
	I0819 04:04:15.622815    3097 main.go:141] libmachine: STDOUT: 
	I0819 04:04:15.622901    3097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:04:15.622999    3097 fix.go:56] duration metric: took 23.08575ms for fixHost
	I0819 04:04:15.623016    3097 start.go:83] releasing machines lock for "ha-927000", held for 23.245125ms
	W0819 04:04:15.623209    3097 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-927000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-927000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:04:15.632071    3097 out.go:201] 
	W0819 04:04:15.636199    3097 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:04:15.636230    3097 out.go:270] * 
	* 
	W0819 04:04:15.639201    3097 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:04:15.645095    3097 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-927000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-927000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 7 (33.470666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.40s)
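The failure shape is the same as the node-start case above: fixHost fails, the machines lock is released, minikube retries once after 5 seconds, then exits with GUEST_PROVISION, which the test sees as exit status 80. As a minimal sketch of how a harness reads that code back from a wrapped command (illustrative Go, not taken from ha_test.go):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical invocation mirroring the failed restart above.
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "ha-927000", "--wait=true")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode()) // 80 in the run above
	}
}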

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-927000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.862125ms)

-- stdout --
	* The control-plane node ha-927000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-927000"

-- /stdout --
** stderr ** 
	I0819 04:04:15.790864    3110 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:04:15.791087    3110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:04:15.791091    3110 out.go:358] Setting ErrFile to fd 2...
	I0819 04:04:15.791093    3110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:04:15.791228    3110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:04:15.791446    3110 mustload.go:65] Loading cluster: ha-927000
	I0819 04:04:15.791651    3110 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0819 04:04:15.791940    3110 out.go:270] ! The control-plane node ha-927000 host is not running (will try others): state=Stopped
	! The control-plane node ha-927000 host is not running (will try others): state=Stopped
	W0819 04:04:15.792049    3110 out.go:270] ! The control-plane node ha-927000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-927000-m02 host is not running (will try others): state=Stopped
	I0819 04:04:15.796164    3110 out.go:177] * The control-plane node ha-927000-m03 host is not running: state=Stopped
	I0819 04:04:15.799054    3110 out.go:177]   To start a cluster, run: "minikube start -p ha-927000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-927000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr: exit status 7 (29.539292ms)

-- stdout --
	ha-927000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:04:15.830356    3112 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:04:15.830541    3112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:04:15.830544    3112 out.go:358] Setting ErrFile to fd 2...
	I0819 04:04:15.830546    3112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:04:15.830673    3112 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:04:15.830798    3112 out.go:352] Setting JSON to false
	I0819 04:04:15.830810    3112 mustload.go:65] Loading cluster: ha-927000
	I0819 04:04:15.830874    3112 notify.go:220] Checking for updates...
	I0819 04:04:15.831054    3112 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:04:15.831059    3112 status.go:255] checking status of ha-927000 ...
	I0819 04:04:15.831261    3112 status.go:330] ha-927000 host status = "Stopped" (err=<nil>)
	I0819 04:04:15.831264    3112 status.go:343] host is not running, skipping remaining checks
	I0819 04:04:15.831267    3112 status.go:257] ha-927000 status: &{Name:ha-927000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 04:04:15.831278    3112 status.go:255] checking status of ha-927000-m02 ...
	I0819 04:04:15.831363    3112 status.go:330] ha-927000-m02 host status = "Stopped" (err=<nil>)
	I0819 04:04:15.831366    3112 status.go:343] host is not running, skipping remaining checks
	I0819 04:04:15.831368    3112 status.go:257] ha-927000-m02 status: &{Name:ha-927000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 04:04:15.831375    3112 status.go:255] checking status of ha-927000-m03 ...
	I0819 04:04:15.831459    3112 status.go:330] ha-927000-m03 host status = "Stopped" (err=<nil>)
	I0819 04:04:15.831461    3112 status.go:343] host is not running, skipping remaining checks
	I0819 04:04:15.831465    3112 status.go:257] ha-927000-m03 status: &{Name:ha-927000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 04:04:15.831469    3112 status.go:255] checking status of ha-927000-m04 ...
	I0819 04:04:15.831559    3112 status.go:330] ha-927000-m04 host status = "Stopped" (err=<nil>)
	I0819 04:04:15.831562    3112 status.go:343] host is not running, skipping remaining checks
	I0819 04:04:15.831564    3112 status.go:257] ha-927000-m04 status: &{Name:ha-927000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 7 (29.927625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-927000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-927000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-927000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-927000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 7 (29.323875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
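The assertion at ha_test.go:413 is a check on the "Status" field of the profile inside the `profile list --output json` dump quoted above. The sketch below is not the actual test helper, only a minimal self-contained illustration of that kind of check; the struct fields are taken from the JSON in the failure message, and the binary path is the one this run uses.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Field names follow the "valid" entries in the JSON dump above.
    type profile struct {
        Name   string
        Status string
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list struct {
            Valid []profile `json:"valid"`
        }
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, p := range list.Valid {
            if p.Name == "ha-927000" && p.Status != "Degraded" {
                fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
            }
        }
    }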

TestMultiControlPlane/serial/StopCluster (202.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 stop -v=7 --alsologtostderr
E0819 04:05:18.688590    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 04:06:41.774037    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-927000 stop -v=7 --alsologtostderr: (3m21.9661245s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr: exit status 7 (66.733458ms)

-- stdout --
	ha-927000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:07:37.968071    3147 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:07:37.968273    3147 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:07:37.968277    3147 out.go:358] Setting ErrFile to fd 2...
	I0819 04:07:37.968281    3147 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:07:37.968460    3147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:07:37.968639    3147 out.go:352] Setting JSON to false
	I0819 04:07:37.968654    3147 mustload.go:65] Loading cluster: ha-927000
	I0819 04:07:37.968679    3147 notify.go:220] Checking for updates...
	I0819 04:07:37.968978    3147 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:07:37.968985    3147 status.go:255] checking status of ha-927000 ...
	I0819 04:07:37.969253    3147 status.go:330] ha-927000 host status = "Stopped" (err=<nil>)
	I0819 04:07:37.969258    3147 status.go:343] host is not running, skipping remaining checks
	I0819 04:07:37.969261    3147 status.go:257] ha-927000 status: &{Name:ha-927000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 04:07:37.969274    3147 status.go:255] checking status of ha-927000-m02 ...
	I0819 04:07:37.969402    3147 status.go:330] ha-927000-m02 host status = "Stopped" (err=<nil>)
	I0819 04:07:37.969407    3147 status.go:343] host is not running, skipping remaining checks
	I0819 04:07:37.969410    3147 status.go:257] ha-927000-m02 status: &{Name:ha-927000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 04:07:37.969415    3147 status.go:255] checking status of ha-927000-m03 ...
	I0819 04:07:37.969540    3147 status.go:330] ha-927000-m03 host status = "Stopped" (err=<nil>)
	I0819 04:07:37.969545    3147 status.go:343] host is not running, skipping remaining checks
	I0819 04:07:37.969547    3147 status.go:257] ha-927000-m03 status: &{Name:ha-927000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 04:07:37.969556    3147 status.go:255] checking status of ha-927000-m04 ...
	I0819 04:07:37.969686    3147 status.go:330] ha-927000-m04 host status = "Stopped" (err=<nil>)
	I0819 04:07:37.969690    3147 status.go:343] host is not running, skipping remaining checks
	I0819 04:07:37.969692    3147 status.go:257] ha-927000-m04 status: &{Name:ha-927000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr": ha-927000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr": ha-927000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr": ha-927000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-927000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 7 (32.477417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.07s)
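The three assertions above (ha_test.go:543, 549, and 552) are textual checks over the `status` output quoted in the failure. Below is a rough sketch of that kind of counting, assuming plain substring counts as the mechanism (the real helper may differ); the sample input is an excerpt of the stdout above.

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // out stands in for the `minikube status -v=7` stdout quoted above.
        out := "ha-927000\ntype: Control Plane\nhost: Stopped\n" +
            "kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        fmt.Println("control planes:", strings.Count(out, "type: Control Plane"))
        fmt.Println("stopped kubelets:", strings.Count(out, "kubelet: Stopped"))
        fmt.Println("stopped apiservers:", strings.Count(out, "apiserver: Stopped"))
    }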

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-927000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-927000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182121333s)

-- stdout --
	* [ha-927000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-927000" primary control-plane node in "ha-927000" cluster
	* Restarting existing qemu2 VM for "ha-927000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-927000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:07:38.031298    3151 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:07:38.031417    3151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:07:38.031420    3151 out.go:358] Setting ErrFile to fd 2...
	I0819 04:07:38.031423    3151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:07:38.031558    3151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:07:38.032516    3151 out.go:352] Setting JSON to false
	I0819 04:07:38.048545    3151 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2221,"bootTime":1724063437,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:07:38.048613    3151 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:07:38.053759    3151 out.go:177] * [ha-927000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:07:38.061655    3151 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:07:38.061700    3151 notify.go:220] Checking for updates...
	I0819 04:07:38.069536    3151 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:07:38.073661    3151 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:07:38.076641    3151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:07:38.080630    3151 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:07:38.083675    3151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:07:38.086903    3151 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:07:38.087175    3151 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:07:38.091586    3151 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:07:38.098685    3151 start.go:297] selected driver: qemu2
	I0819 04:07:38.098690    3151 start.go:901] validating driver "qemu2" against &{Name:ha-927000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-927000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:07:38.098755    3151 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:07:38.100999    3151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:07:38.101044    3151 cni.go:84] Creating CNI manager for ""
	I0819 04:07:38.101049    3151 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 04:07:38.101115    3151 start.go:340] cluster config:
	{Name:ha-927000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-927000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:07:38.104552    3151 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:07:38.112656    3151 out.go:177] * Starting "ha-927000" primary control-plane node in "ha-927000" cluster
	I0819 04:07:38.116606    3151 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:07:38.116620    3151 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:07:38.116629    3151 cache.go:56] Caching tarball of preloaded images
	I0819 04:07:38.116684    3151 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:07:38.116690    3151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:07:38.116761    3151 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/ha-927000/config.json ...
	I0819 04:07:38.117255    3151 start.go:360] acquireMachinesLock for ha-927000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:07:38.117302    3151 start.go:364] duration metric: took 38.5µs to acquireMachinesLock for "ha-927000"
	I0819 04:07:38.117314    3151 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:07:38.117319    3151 fix.go:54] fixHost starting: 
	I0819 04:07:38.117453    3151 fix.go:112] recreateIfNeeded on ha-927000: state=Stopped err=<nil>
	W0819 04:07:38.117462    3151 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:07:38.120675    3151 out.go:177] * Restarting existing qemu2 VM for "ha-927000" ...
	I0819 04:07:38.128609    3151 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:07:38.128648    3151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:c2:71:37:d8:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/disk.qcow2
	I0819 04:07:38.130737    3151 main.go:141] libmachine: STDOUT: 
	I0819 04:07:38.130763    3151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:07:38.130793    3151 fix.go:56] duration metric: took 13.473334ms for fixHost
	I0819 04:07:38.130798    3151 start.go:83] releasing machines lock for "ha-927000", held for 13.491541ms
	W0819 04:07:38.130806    3151 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:07:38.130845    3151 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:07:38.130851    3151 start.go:729] Will try again in 5 seconds ...
	I0819 04:07:43.133000    3151 start.go:360] acquireMachinesLock for ha-927000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:07:43.133508    3151 start.go:364] duration metric: took 362.333µs to acquireMachinesLock for "ha-927000"
	I0819 04:07:43.133682    3151 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:07:43.133700    3151 fix.go:54] fixHost starting: 
	I0819 04:07:43.134410    3151 fix.go:112] recreateIfNeeded on ha-927000: state=Stopped err=<nil>
	W0819 04:07:43.134433    3151 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:07:43.138895    3151 out.go:177] * Restarting existing qemu2 VM for "ha-927000" ...
	I0819 04:07:43.141784    3151 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:07:43.142010    3151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:c2:71:37:d8:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/ha-927000/disk.qcow2
	I0819 04:07:43.150920    3151 main.go:141] libmachine: STDOUT: 
	I0819 04:07:43.151009    3151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:07:43.151100    3151 fix.go:56] duration metric: took 17.396834ms for fixHost
	I0819 04:07:43.151122    3151 start.go:83] releasing machines lock for "ha-927000", held for 17.554625ms
	W0819 04:07:43.151319    3151 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-927000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-927000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:07:43.158845    3151 out.go:201] 
	W0819 04:07:43.162844    3151 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:07:43.162900    3151 out.go:270] * 
	* 
	W0819 04:07:43.165668    3151 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:07:43.177830    3151 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-927000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 7 (68.167083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
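Both restart attempts die on the same driver error: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client (see the libmachine command line in the stderr above), and nothing is accepting connections on /var/run/socket_vmnet on this host. A hypothetical standalone probe, reproducing the refusal outside minikube:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // socket_vmnet serves a unix socket; if the daemon is down, this
        // dial fails with "connect: connection refused", matching the
        // driver error in the log above.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }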

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-927000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-927000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-927000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-927000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 7 (29.008333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-927000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-927000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.709208ms)

-- stdout --
	* The control-plane node ha-927000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-927000"

-- /stdout --
** stderr ** 
	I0819 04:07:43.362154    3166 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:07:43.362294    3166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:07:43.362302    3166 out.go:358] Setting ErrFile to fd 2...
	I0819 04:07:43.362304    3166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:07:43.362438    3166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:07:43.362678    3166 mustload.go:65] Loading cluster: ha-927000
	I0819 04:07:43.362883    3166 config.go:182] Loaded profile config "ha-927000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0819 04:07:43.363186    3166 out.go:270] ! The control-plane node ha-927000 host is not running (will try others): state=Stopped
	! The control-plane node ha-927000 host is not running (will try others): state=Stopped
	W0819 04:07:43.363282    3166 out.go:270] ! The control-plane node ha-927000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-927000-m02 host is not running (will try others): state=Stopped
	I0819 04:07:43.367727    3166 out.go:177] * The control-plane node ha-927000-m03 host is not running: state=Stopped
	I0819 04:07:43.371661    3166 out.go:177]   To start a cluster, run: "minikube start -p ha-927000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-927000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-927000 -n ha-927000: exit status 7 (30.071208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-927000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.1s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-686000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-686000 --driver=qemu2 : exit status 80 (10.028446292s)

-- stdout --
	* [image-686000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-686000" primary control-plane node in "image-686000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-686000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-686000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-686000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-686000 -n image-686000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-686000 -n image-686000: exit status 7 (66.653791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-686000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.10s)

TestJSONOutput/start/Command (9.77s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-210000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-210000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.768642667s)

-- stdout --
	{"specversion":"1.0","id":"66d9e99c-736f-45cb-9879-a95582847c11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-210000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d3e08c4-a0fe-4961-80a5-2a4869206578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19476"}}
	{"specversion":"1.0","id":"00a49512-f236-4415-8302-35e6afbf1f62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig"}}
	{"specversion":"1.0","id":"a4166983-3bf6-4211-b20a-16a0cedb831c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"27b9c768-40df-445a-bbe3-435f427bac49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"933d825c-c4ae-43b8-8caa-5489273573b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube"}}
	{"specversion":"1.0","id":"3aeeb413-fdb3-49d7-a87c-80817ed30352","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"85cdd1f6-19b5-4b48-943f-b0e2009359ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"11d1da7d-7a79-4a80-8ae2-dc4d358788a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"19a08852-79ba-4f8a-bf14-cb84e376d406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-210000\" primary control-plane node in \"json-output-210000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"09f1dc51-52bd-4a99-9d5b-7cd3823e6e0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"66f91c17-c08b-4d9b-89ff-48c3d28fa77d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-210000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"de172709-0a6d-42db-abd5-04434a908ad3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"17b0bba5-b648-4a38-a907-3a3318cc2bb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d66595b3-fe4c-434d-b88a-647fbcc35f83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-210000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"738c6a96-c9f5-4ca8-bd0b-cbb0c96b08e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"24337903-1f6a-4eaa-a5be-3da7e455ea67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-210000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.77s)
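With --output=json, every stdout line is expected to be one JSON cloud event, but the failed VM start interleaves the plain-text "OUTPUT:" and "ERROR:" lines visible above, and json_output_test.go:70 chokes on the first non-JSON byte (the 'O' named in the parse error). A tiny reproduction of that unmarshal failure:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // A cloud-events consumer expects each line to be JSON; the bare
        // "OUTPUT: " line from the failed start is not, and its first
        // byte triggers exactly the parse error reported above.
        var event map[string]interface{}
        err := json.Unmarshal([]byte("OUTPUT: "), &event)
        fmt.Println(err) // invalid character 'O' looking for beginning of value
    }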

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-210000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-210000 --output=json --user=testUser: exit status 83 (77.859167ms)

-- stdout --
	{"specversion":"1.0","id":"31f7df5d-48a1-4a46-8d77-5a753fbf5a1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-210000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"f2b66d1b-0c84-4057-84cc-6b7699356b40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-210000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-210000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-210000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-210000 --output=json --user=testUser: exit status 83 (44.240833ms)

-- stdout --
	* The control-plane node json-output-210000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-210000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-210000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-210000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-996000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-996000 --driver=qemu2 : exit status 80 (9.806748792s)

-- stdout --
	* [first-996000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-996000" primary control-plane node in "first-996000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-996000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-996000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-996000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-19 04:08:17.319265 -0700 PDT m=+2005.536384001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-998000 -n second-998000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-998000 -n second-998000: exit status 85 (81.111666ms)

-- stdout --
	* Profile "second-998000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-998000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-998000" host is not running, skipping log retrieval (state="* Profile \"second-998000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-998000\"")
helpers_test.go:175: Cleaning up "second-998000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-998000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-19 04:08:17.508423 -0700 PDT m=+2005.725544543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-996000 -n first-996000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-996000 -n first-996000: exit status 7 (31.44725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-996000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-996000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-996000
--- FAIL: TestMinikubeProfile (10.11s)
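Editor's note: every qemu2 start in this run dies the same way: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so no VM network can be attached and provisioning aborts with GUEST_PROVISION. "Connection refused" on a unix socket means nothing is listening there, i.e. the socket_vmnet daemon is not running on the build host (or listens on a different path). A minimal probe of that socket, assuming the default path from the logs (sketch only):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // matches the failures above
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}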

TestMountStart/serial/StartWithMountFirst (9.95s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-522000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-522000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.879435958s)

-- stdout --
	* [mount-start-1-522000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-522000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-522000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-522000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-522000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-522000 -n mount-start-1-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-522000 -n mount-start-1-522000: exit status 7 (70.228292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-522000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.95s)

TestMultiNode/serial/FreshStart2Nodes (9.93s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-837000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0819 04:08:32.791601    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-837000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.865235209s)

-- stdout --
	* [multinode-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-837000" primary control-plane node in "multinode-837000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-837000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:08:27.778291    3311 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:08:27.778409    3311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:08:27.778413    3311 out.go:358] Setting ErrFile to fd 2...
	I0819 04:08:27.778415    3311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:08:27.778536    3311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:08:27.779605    3311 out.go:352] Setting JSON to false
	I0819 04:08:27.795706    3311 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2270,"bootTime":1724063437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:08:27.795781    3311 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:08:27.803902    3311 out.go:177] * [multinode-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:08:27.811917    3311 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:08:27.811959    3311 notify.go:220] Checking for updates...
	I0819 04:08:27.819914    3311 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:08:27.827881    3311 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:08:27.837843    3311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:08:27.841871    3311 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:08:27.845894    3311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:08:27.850022    3311 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:08:27.853717    3311 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:08:27.860901    3311 start.go:297] selected driver: qemu2
	I0819 04:08:27.860907    3311 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:08:27.860913    3311 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:08:27.863512    3311 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:08:27.866932    3311 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:08:27.870008    3311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:08:27.870040    3311 cni.go:84] Creating CNI manager for ""
	I0819 04:08:27.870045    3311 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 04:08:27.870050    3311 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 04:08:27.870090    3311 start.go:340] cluster config:
	{Name:multinode-837000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:08:27.874431    3311 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:08:27.880863    3311 out.go:177] * Starting "multinode-837000" primary control-plane node in "multinode-837000" cluster
	I0819 04:08:27.884847    3311 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:08:27.884863    3311 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:08:27.884871    3311 cache.go:56] Caching tarball of preloaded images
	I0819 04:08:27.884928    3311 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:08:27.884934    3311 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:08:27.885138    3311 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/multinode-837000/config.json ...
	I0819 04:08:27.885154    3311 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/multinode-837000/config.json: {Name:mk5353f02095964937497830b6f68b2be9eee2e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:08:27.885389    3311 start.go:360] acquireMachinesLock for multinode-837000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:08:27.885425    3311 start.go:364] duration metric: took 30µs to acquireMachinesLock for "multinode-837000"
	I0819 04:08:27.885441    3311 start.go:93] Provisioning new machine with config: &{Name:multinode-837000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:08:27.885474    3311 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:08:27.892901    3311 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:08:27.911429    3311 start.go:159] libmachine.API.Create for "multinode-837000" (driver="qemu2")
	I0819 04:08:27.911464    3311 client.go:168] LocalClient.Create starting
	I0819 04:08:27.911522    3311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:08:27.911553    3311 main.go:141] libmachine: Decoding PEM data...
	I0819 04:08:27.911565    3311 main.go:141] libmachine: Parsing certificate...
	I0819 04:08:27.911609    3311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:08:27.911635    3311 main.go:141] libmachine: Decoding PEM data...
	I0819 04:08:27.911642    3311 main.go:141] libmachine: Parsing certificate...
	I0819 04:08:27.912002    3311 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:08:28.060896    3311 main.go:141] libmachine: Creating SSH key...
	I0819 04:08:28.222554    3311 main.go:141] libmachine: Creating Disk image...
	I0819 04:08:28.222560    3311 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:08:28.222800    3311 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:08:28.232500    3311 main.go:141] libmachine: STDOUT: 
	I0819 04:08:28.232520    3311 main.go:141] libmachine: STDERR: 
	I0819 04:08:28.232573    3311 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2 +20000M
	I0819 04:08:28.240524    3311 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:08:28.240545    3311 main.go:141] libmachine: STDERR: 
	I0819 04:08:28.240558    3311 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:08:28.240570    3311 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:08:28.240580    3311 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:08:28.240607    3311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:21:68:13:1c:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:08:28.242238    3311 main.go:141] libmachine: STDOUT: 
	I0819 04:08:28.242254    3311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:08:28.242271    3311 client.go:171] duration metric: took 330.806416ms to LocalClient.Create
	I0819 04:08:30.244429    3311 start.go:128] duration metric: took 2.358954209s to createHost
	I0819 04:08:30.244493    3311 start.go:83] releasing machines lock for "multinode-837000", held for 2.359089958s
	W0819 04:08:30.244544    3311 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:08:30.251980    3311 out.go:177] * Deleting "multinode-837000" in qemu2 ...
	W0819 04:08:30.279932    3311 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:08:30.279952    3311 start.go:729] Will try again in 5 seconds ...
	I0819 04:08:35.282124    3311 start.go:360] acquireMachinesLock for multinode-837000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:08:35.282642    3311 start.go:364] duration metric: took 421.417µs to acquireMachinesLock for "multinode-837000"
	I0819 04:08:35.282809    3311 start.go:93] Provisioning new machine with config: &{Name:multinode-837000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:08:35.283156    3311 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:08:35.293616    3311 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:08:35.344970    3311 start.go:159] libmachine.API.Create for "multinode-837000" (driver="qemu2")
	I0819 04:08:35.345018    3311 client.go:168] LocalClient.Create starting
	I0819 04:08:35.345149    3311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:08:35.345219    3311 main.go:141] libmachine: Decoding PEM data...
	I0819 04:08:35.345235    3311 main.go:141] libmachine: Parsing certificate...
	I0819 04:08:35.345297    3311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:08:35.345341    3311 main.go:141] libmachine: Decoding PEM data...
	I0819 04:08:35.345354    3311 main.go:141] libmachine: Parsing certificate...
	I0819 04:08:35.345909    3311 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:08:35.505026    3311 main.go:141] libmachine: Creating SSH key...
	I0819 04:08:35.543829    3311 main.go:141] libmachine: Creating Disk image...
	I0819 04:08:35.543834    3311 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:08:35.544017    3311 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:08:35.553075    3311 main.go:141] libmachine: STDOUT: 
	I0819 04:08:35.553096    3311 main.go:141] libmachine: STDERR: 
	I0819 04:08:35.553146    3311 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2 +20000M
	I0819 04:08:35.560980    3311 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:08:35.561000    3311 main.go:141] libmachine: STDERR: 
	I0819 04:08:35.561009    3311 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:08:35.561012    3311 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:08:35.561027    3311 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:08:35.561055    3311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:14:2b:ce:27:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:08:35.562691    3311 main.go:141] libmachine: STDOUT: 
	I0819 04:08:35.562709    3311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:08:35.562720    3311 client.go:171] duration metric: took 217.698875ms to LocalClient.Create
	I0819 04:08:37.564882    3311 start.go:128] duration metric: took 2.281704708s to createHost
	I0819 04:08:37.564957    3311 start.go:83] releasing machines lock for "multinode-837000", held for 2.282291041s
	W0819 04:08:37.565382    3311 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-837000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-837000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:08:37.581075    3311 out.go:201] 
	W0819 04:08:37.585316    3311 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:08:37.585343    3311 out.go:270] * 
	* 
	W0819 04:08:37.588087    3311 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:08:37.601044    3311 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-837000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (66.155375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.93s)
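Editor's note: the verbose log above shows the full create path: qemu-img convert builds the qcow2 disk, qemu-img resize grows it by 20000M, and only the final step, launching qemu-system-aarch64 through socket_vmnet_client, fails. minikube then deletes the half-created machine, waits five seconds, and tries once more ("Will try again in 5 seconds ..."). A paraphrase of that retry shape, with hypothetical stand-in callbacks rather than minikube's real API (sketch only):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createWithRetry mirrors the flow in the log: one failed create, a
	// cleanup pass, a fixed backoff, then a single second attempt.
	func createWithRetry(create func() error, cleanup func()) error {
		if err := create(); err == nil {
			return nil
		}
		cleanup() // corresponds to: * Deleting "multinode-837000" in qemu2 ...
		time.Sleep(5 * time.Second)
		return create() // on a second failure minikube exits with status 80
	}

	func main() {
		create := func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}
		cleanup := func() { fmt.Println("deleting half-created machine") }
		if err := createWithRetry(create, cleanup); err != nil {
			fmt.Println("GUEST_PROVISION:", err)
		}
	}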

TestMultiNode/serial/DeployApp2Nodes (80.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.922666ms)

** stderr ** 
	error: cluster "multinode-837000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- rollout status deployment/busybox: exit status 1 (58.527333ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.569792ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.552875ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.611708ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.818916ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.190458ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.631625ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.110625ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.473958ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.239ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.465917ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.36675ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.518958ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.293375ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.8625ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (29.123917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (80.80s)
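Editor's note: nearly all 80 seconds of this test are spent in the poll loop above. The pod-IP query is retried with waits between attempts because pod IPs legitimately take a while to appear on a healthy cluster; here there is no cluster at all, so every attempt returns the same "no server found" error until the attempts run out. The pattern, with plain kubectl standing in for the minikube wrapper the test actually invokes (sketch only):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "multinode-837000",
				"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
			if err == nil && len(out) > 0 {
				fmt.Printf("pod IPs: %s\n", out)
				return
			}
			fmt.Printf("attempt %d: failed to retrieve Pod IPs (may be temporary)\n", attempt)
			time.Sleep(time.Second) // the real test waits much longer between attempts
		}
		fmt.Println("failed to resolve pod IPs")
	}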

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-837000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.823958ms)

** stderr ** 
	error: no server found for cluster "multinode-837000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (30.031542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-837000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-837000 -v 3 --alsologtostderr: exit status 83 (41.032292ms)

-- stdout --
	* The control-plane node multinode-837000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-837000"

-- /stdout --
** stderr ** 
	I0819 04:09:58.594201    3394 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:09:58.594355    3394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:58.594358    3394 out.go:358] Setting ErrFile to fd 2...
	I0819 04:09:58.594360    3394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:58.594486    3394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:09:58.594715    3394 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:09:58.594907    3394 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:09:58.599229    3394 out.go:177] * The control-plane node multinode-837000 host is not running: state=Stopped
	I0819 04:09:58.602087    3394 out.go:177]   To start a cluster, run: "minikube start -p multinode-837000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-837000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (29.894041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-837000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-837000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.919208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-837000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-837000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-837000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (29.313917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
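Editor's note: two errors are reported above for one root cause. kubectl exits non-zero because the context is missing, and since it wrote nothing to stdout, decoding the empty capture yields "unexpected end of JSON input". That second message is exactly what encoding/json returns for empty input (illustrative only):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels) // empty stdout capture
		fmt.Println(err)                           // unexpected end of JSON input
	}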

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-837000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-837000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-837000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-837000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (28.989167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
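For reference, the assertion above decodes the "profile list --output json" payload and counts Config.Nodes; it finds one node instead of three because the earlier multinode start never created m02/m03. A minimal sketch of that check, with the struct trimmed to just the fields involved (an illustration, not minikube's actual config type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the keys this check needs; the real payload
// (quoted in the failure above) carries the full cluster config.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name string `json:"Name"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // the test expects 3 here
	}
}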

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status --output json --alsologtostderr: exit status 7 (29.693167ms)

-- stdout --
	{"Name":"multinode-837000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0819 04:09:58.799315    3406 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:09:58.799480    3406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:58.799483    3406 out.go:358] Setting ErrFile to fd 2...
	I0819 04:09:58.799485    3406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:58.799623    3406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:09:58.799738    3406 out.go:352] Setting JSON to true
	I0819 04:09:58.799755    3406 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:09:58.799797    3406 notify.go:220] Checking for updates...
	I0819 04:09:58.799949    3406 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:09:58.799954    3406 status.go:255] checking status of multinode-837000 ...
	I0819 04:09:58.800156    3406 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:09:58.800160    3406 status.go:343] host is not running, skipping remaining checks
	I0819 04:09:58.800163    3406 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-837000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (29.937167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
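The unmarshal failure above is a shape mismatch: with a single node, "minikube status --output json" prints one bare object (see the stdout block), while the test decodes into a []cmd.Status slice. A decoder that tolerates both shapes, as a minimal sketch (the status struct here mirrors the object in the stdout above, not minikube's full cmd.Status):

package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts both shapes: a bare object for a single node
// and an array for multiple nodes.
func decodeStatuses(raw []byte) ([]status, error) {
	var many []status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []status{one}, nil
}

func main() {
	// The exact line from the stdout block above.
	raw := []byte(`{"Name":"multinode-837000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%d node(s), first host state %q\n", len(sts), sts[0].Host)
}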

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 node stop m03: exit status 85 (49.413125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-837000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status: exit status 7 (29.924917ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr: exit status 7 (29.450834ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:09:58.938845    3414 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:09:58.938987    3414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:58.938990    3414 out.go:358] Setting ErrFile to fd 2...
	I0819 04:09:58.938993    3414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:58.939128    3414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:09:58.939235    3414 out.go:352] Setting JSON to false
	I0819 04:09:58.939246    3414 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:09:58.939307    3414 notify.go:220] Checking for updates...
	I0819 04:09:58.939444    3414 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:09:58.939449    3414 status.go:255] checking status of multinode-837000 ...
	I0819 04:09:58.939650    3414 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:09:58.939654    3414 status.go:343] host is not running, skipping remaining checks
	I0819 04:09:58.939656    3414 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr": multinode-837000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (29.2595ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
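"node stop m03" exits 85 with GUEST_NODE_RETRIEVE because the profile only ever contains the primary node; m02 and m03 were never created. A guard that checks the node list before acting on a named node, sketched under the assumption that "minikube node list" prints one name/IP pair per line (current behavior, not a stable interface):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeExists scans a profile's node list for a node name. It assumes
// `minikube node list` emits "name<TAB>ip" lines, which is what current
// minikube does but is not a documented contract.
func nodeExists(minikube, profile, node string) (bool, error) {
	out, err := exec.Command(minikube, "node", "list", "-p", profile).Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if name := strings.SplitN(line, "\t", 2)[0]; name == node {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := nodeExists("out/minikube-darwin-arm64", "multinode-837000", "m03")
	fmt.Println(ok, err) // false here: only the primary node was ever created
}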

TestMultiNode/serial/StartAfterStop (51.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.964042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0819 04:09:58.998551    3418 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:09:58.998754    3418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:58.998757    3418 out.go:358] Setting ErrFile to fd 2...
	I0819 04:09:58.998763    3418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:58.998883    3418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:09:58.999106    3418 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:09:58.999306    3418 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:09:59.004214    3418 out.go:201] 
	W0819 04:09:59.005309    3418 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0819 04:09:59.005319    3418 out.go:270] * 
	* 
	W0819 04:09:59.006976    3418 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:09:59.010114    3418 out.go:201] 

** /stderr **
multinode_test.go:284: I0819 04:09:58.998551    3418 out.go:345] Setting OutFile to fd 1 ...
I0819 04:09:58.998754    3418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:09:58.998757    3418 out.go:358] Setting ErrFile to fd 2...
I0819 04:09:58.998763    3418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:09:58.998883    3418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
I0819 04:09:58.999106    3418 mustload.go:65] Loading cluster: multinode-837000
I0819 04:09:58.999306    3418 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:09:59.004214    3418 out.go:201] 
W0819 04:09:59.005309    3418 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0819 04:09:59.005319    3418 out.go:270] * 
* 
W0819 04:09:59.006976    3418 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 04:09:59.010114    3418 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-837000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr: exit status 7 (29.370666ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:09:59.042882    3420 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:09:59.043038    3420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:59.043041    3420 out.go:358] Setting ErrFile to fd 2...
	I0819 04:09:59.043043    3420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:59.043160    3420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:09:59.043280    3420 out.go:352] Setting JSON to false
	I0819 04:09:59.043291    3420 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:09:59.043354    3420 notify.go:220] Checking for updates...
	I0819 04:09:59.043495    3420 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:09:59.043500    3420 status.go:255] checking status of multinode-837000 ...
	I0819 04:09:59.043692    3420 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:09:59.043695    3420 status.go:343] host is not running, skipping remaining checks
	I0819 04:09:59.043697    3420 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr: exit status 7 (72.968375ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:09:59.865513    3422 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:09:59.865683    3422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:59.865688    3422 out.go:358] Setting ErrFile to fd 2...
	I0819 04:09:59.865691    3422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:09:59.865861    3422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:09:59.866017    3422 out.go:352] Setting JSON to false
	I0819 04:09:59.866032    3422 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:09:59.866067    3422 notify.go:220] Checking for updates...
	I0819 04:09:59.866314    3422 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:09:59.866321    3422 status.go:255] checking status of multinode-837000 ...
	I0819 04:09:59.866591    3422 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:09:59.866596    3422 status.go:343] host is not running, skipping remaining checks
	I0819 04:09:59.866599    3422 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr: exit status 7 (73.501125ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:10:01.591355    3424 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:10:01.591545    3424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:01.591550    3424 out.go:358] Setting ErrFile to fd 2...
	I0819 04:10:01.591553    3424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:01.591732    3424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:10:01.591903    3424 out.go:352] Setting JSON to false
	I0819 04:10:01.591918    3424 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:10:01.591952    3424 notify.go:220] Checking for updates...
	I0819 04:10:01.592224    3424 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:10:01.592231    3424 status.go:255] checking status of multinode-837000 ...
	I0819 04:10:01.592528    3424 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:10:01.592533    3424 status.go:343] host is not running, skipping remaining checks
	I0819 04:10:01.592536    3424 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr: exit status 7 (74.858708ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:10:04.214683    3426 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:10:04.214910    3426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:04.214914    3426 out.go:358] Setting ErrFile to fd 2...
	I0819 04:10:04.214918    3426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:04.215104    3426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:10:04.215289    3426 out.go:352] Setting JSON to false
	I0819 04:10:04.215305    3426 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:10:04.215355    3426 notify.go:220] Checking for updates...
	I0819 04:10:04.215592    3426 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:10:04.215599    3426 status.go:255] checking status of multinode-837000 ...
	I0819 04:10:04.215889    3426 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:10:04.215895    3426 status.go:343] host is not running, skipping remaining checks
	I0819 04:10:04.215898    3426 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr: exit status 7 (71.715792ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:10:07.303703    3428 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:10:07.303852    3428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:07.303857    3428 out.go:358] Setting ErrFile to fd 2...
	I0819 04:10:07.303860    3428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:07.304020    3428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:10:07.304170    3428 out.go:352] Setting JSON to false
	I0819 04:10:07.304186    3428 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:10:07.304223    3428 notify.go:220] Checking for updates...
	I0819 04:10:07.304473    3428 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:10:07.304479    3428 status.go:255] checking status of multinode-837000 ...
	I0819 04:10:07.304750    3428 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:10:07.304755    3428 status.go:343] host is not running, skipping remaining checks
	I0819 04:10:07.304758    3428 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr: exit status 7 (72.167792ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:10:12.798769    3433 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:10:12.798951    3433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:12.798955    3433 out.go:358] Setting ErrFile to fd 2...
	I0819 04:10:12.798959    3433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:12.799138    3433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:10:12.799300    3433 out.go:352] Setting JSON to false
	I0819 04:10:12.799314    3433 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:10:12.799354    3433 notify.go:220] Checking for updates...
	I0819 04:10:12.799614    3433 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:10:12.799620    3433 status.go:255] checking status of multinode-837000 ...
	I0819 04:10:12.799911    3433 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:10:12.799916    3433 status.go:343] host is not running, skipping remaining checks
	I0819 04:10:12.799919    3433 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0819 04:10:18.684439    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr: exit status 7 (72.84675ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:10:20.109324    3435 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:10:20.109527    3435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:20.109531    3435 out.go:358] Setting ErrFile to fd 2...
	I0819 04:10:20.109535    3435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:20.109708    3435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:10:20.109858    3435 out.go:352] Setting JSON to false
	I0819 04:10:20.109872    3435 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:10:20.109912    3435 notify.go:220] Checking for updates...
	I0819 04:10:20.110106    3435 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:10:20.110119    3435 status.go:255] checking status of multinode-837000 ...
	I0819 04:10:20.110389    3435 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:10:20.110394    3435 status.go:343] host is not running, skipping remaining checks
	I0819 04:10:20.110397    3435 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr: exit status 7 (71.399375ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:10:26.454299    3437 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:10:26.454496    3437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:26.454501    3437 out.go:358] Setting ErrFile to fd 2...
	I0819 04:10:26.454510    3437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:26.454702    3437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:10:26.454854    3437 out.go:352] Setting JSON to false
	I0819 04:10:26.454870    3437 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:10:26.454908    3437 notify.go:220] Checking for updates...
	I0819 04:10:26.455128    3437 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:10:26.455139    3437 status.go:255] checking status of multinode-837000 ...
	I0819 04:10:26.455391    3437 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:10:26.455396    3437 status.go:343] host is not running, skipping remaining checks
	I0819 04:10:26.455399    3437 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr: exit status 7 (72.587834ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:10:50.820905    3447 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:10:50.821110    3447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:50.821115    3447 out.go:358] Setting ErrFile to fd 2...
	I0819 04:10:50.821118    3447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:50.821312    3447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:10:50.821481    3447 out.go:352] Setting JSON to false
	I0819 04:10:50.821497    3447 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:10:50.821521    3447 notify.go:220] Checking for updates...
	I0819 04:10:50.821800    3447 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:10:50.821808    3447 status.go:255] checking status of multinode-837000 ...
	I0819 04:10:50.822093    3447 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:10:50.822099    3447 status.go:343] host is not running, skipping remaining checks
	I0819 04:10:50.822102    3447 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-837000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (33.345875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.89s)
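The timestamps on the status retries above (04:09:59, 04:10:01, 04:10:04, ..., 04:10:50) show the test polling with a growing delay until it gives up after roughly 50 seconds. The shape of that poll loop, as a minimal sketch (waitRunning and the attempt count are illustrative, not the test's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitRunning re-runs `minikube status` with a doubling delay and stops
// once the command exits 0 or the attempts are exhausted.
func waitRunning(minikube, profile string, attempts int) error {
	delay := time.Second
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(minikube, "status", "-p", profile).Run(); err == nil {
			return nil // exit status 0: host, kubelet and apiserver are all up
		}
		time.Sleep(delay)
		delay *= 2 // back off: 1s, 2s, 4s, ...
	}
	return fmt.Errorf("still not running after %d attempts: %w", attempts, err)
}

func main() {
	if err := waitRunning("out/minikube-darwin-arm64", "multinode-837000", 6); err != nil {
		fmt.Println(err) // always fails in this run: the VM never started
	}
}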

TestMultiNode/serial/RestartKeepsNodes (9.22s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-837000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-837000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-837000: (3.860754292s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-837000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-837000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.226451375s)

-- stdout --
	* [multinode-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-837000" primary control-plane node in "multinode-837000" cluster
	* Restarting existing qemu2 VM for "multinode-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:10:54.811502    3473 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:10:54.811673    3473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:54.811678    3473 out.go:358] Setting ErrFile to fd 2...
	I0819 04:10:54.811681    3473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:10:54.811843    3473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:10:54.813132    3473 out.go:352] Setting JSON to false
	I0819 04:10:54.832960    3473 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2417,"bootTime":1724063437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:10:54.833033    3473 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:10:54.837214    3473 out.go:177] * [multinode-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:10:54.844058    3473 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:10:54.844106    3473 notify.go:220] Checking for updates...
	I0819 04:10:54.851080    3473 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:10:54.854021    3473 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:10:54.857053    3473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:10:54.860072    3473 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:10:54.863042    3473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:10:54.866398    3473 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:10:54.866459    3473 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:10:54.871074    3473 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:10:54.878013    3473 start.go:297] selected driver: qemu2
	I0819 04:10:54.878019    3473 start.go:901] validating driver "qemu2" against &{Name:multinode-837000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:10:54.878070    3473 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:10:54.880356    3473 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:10:54.880381    3473 cni.go:84] Creating CNI manager for ""
	I0819 04:10:54.880387    3473 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 04:10:54.880430    3473 start.go:340] cluster config:
	{Name:multinode-837000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:10:54.883954    3473 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:10:54.891021    3473 out.go:177] * Starting "multinode-837000" primary control-plane node in "multinode-837000" cluster
	I0819 04:10:54.895075    3473 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:10:54.895094    3473 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:10:54.895107    3473 cache.go:56] Caching tarball of preloaded images
	I0819 04:10:54.895180    3473 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:10:54.895186    3473 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:10:54.895253    3473 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/multinode-837000/config.json ...
	I0819 04:10:54.895712    3473 start.go:360] acquireMachinesLock for multinode-837000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:10:54.895747    3473 start.go:364] duration metric: took 28.916µs to acquireMachinesLock for "multinode-837000"
	I0819 04:10:54.895756    3473 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:10:54.895763    3473 fix.go:54] fixHost starting: 
	I0819 04:10:54.895892    3473 fix.go:112] recreateIfNeeded on multinode-837000: state=Stopped err=<nil>
	W0819 04:10:54.895901    3473 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:10:54.899075    3473 out.go:177] * Restarting existing qemu2 VM for "multinode-837000" ...
	I0819 04:10:54.907099    3473 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:10:54.907144    3473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:14:2b:ce:27:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:10:54.909227    3473 main.go:141] libmachine: STDOUT: 
	I0819 04:10:54.909249    3473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:10:54.909286    3473 fix.go:56] duration metric: took 13.523834ms for fixHost
	I0819 04:10:54.909291    3473 start.go:83] releasing machines lock for "multinode-837000", held for 13.539958ms
	W0819 04:10:54.909298    3473 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:10:54.909326    3473 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:10:54.909331    3473 start.go:729] Will try again in 5 seconds ...
	I0819 04:10:59.911550    3473 start.go:360] acquireMachinesLock for multinode-837000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:10:59.912035    3473 start.go:364] duration metric: took 315.75µs to acquireMachinesLock for "multinode-837000"
	I0819 04:10:59.912186    3473 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:10:59.912210    3473 fix.go:54] fixHost starting: 
	I0819 04:10:59.912954    3473 fix.go:112] recreateIfNeeded on multinode-837000: state=Stopped err=<nil>
	W0819 04:10:59.912980    3473 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:10:59.922427    3473 out.go:177] * Restarting existing qemu2 VM for "multinode-837000" ...
	I0819 04:10:59.926437    3473 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:10:59.926737    3473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:14:2b:ce:27:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:10:59.936612    3473 main.go:141] libmachine: STDOUT: 
	I0819 04:10:59.936698    3473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:10:59.936783    3473 fix.go:56] duration metric: took 24.575458ms for fixHost
	I0819 04:10:59.936808    3473 start.go:83] releasing machines lock for "multinode-837000", held for 24.750833ms
	W0819 04:10:59.937046    3473 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-837000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-837000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:10:59.944477    3473 out.go:201] 
	W0819 04:10:59.948483    3473 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:10:59.948509    3473 out.go:270] * 
	* 
	W0819 04:10:59.950803    3473 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:10:59.960440    3473 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-837000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-837000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (33.092917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.22s)
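Both restart attempts above die at the same point: the qemu2 driver launches qemu via socket_vmnet_client, which gets "Connection refused" on /var/run/socket_vmnet, meaning nothing is listening on the host's socket_vmnet socket. A quick probe of that unix socket (a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver hands VM networking to socket_vmnet through this
	// unix socket; "connection refused" from minikube means nothing is
	// accepting on it (e.g. the socket_vmnet daemon is not running).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}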

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 node delete m03: exit status 83 (39.237541ms)

-- stdout --
	* The control-plane node multinode-837000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-837000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-837000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr: exit status 7 (28.979166ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:11:00.143910    3487 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:11:00.144046    3487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:11:00.144049    3487 out.go:358] Setting ErrFile to fd 2...
	I0819 04:11:00.144052    3487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:11:00.144171    3487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:11:00.144291    3487 out.go:352] Setting JSON to false
	I0819 04:11:00.144303    3487 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:11:00.144359    3487 notify.go:220] Checking for updates...
	I0819 04:11:00.144482    3487 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:11:00.144487    3487 status.go:255] checking status of multinode-837000 ...
	I0819 04:11:00.144679    3487 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:11:00.144683    3487 status.go:343] host is not running, skipping remaining checks
	I0819 04:11:00.144685    3487 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (30.199625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
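The post-mortem helpers read host state through minikube's Go-template output. Any field of the Status struct dumped in the stderr above (Host, Kubelet, APIServer, Kubeconfig) can be queried the same way; for example, a sketch combining two fields (hypothetical invocation, using the same --format flag the helpers use):

    # prints "Stopped/Stopped" for this profile; exit status 7 still marks the stopped host
    out/minikube-darwin-arm64 status -p multinode-837000 --format='{{.Host}}/{{.Kubelet}}'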

TestMultiNode/serial/StopMultiNode (2.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-837000 stop: (1.930529792s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status: exit status 7 (64.117417ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr: exit status 7 (32.824375ms)

-- stdout --
	multinode-837000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:11:02.202026    3503 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:11:02.202178    3503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:11:02.202182    3503 out.go:358] Setting ErrFile to fd 2...
	I0819 04:11:02.202184    3503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:11:02.202322    3503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:11:02.202434    3503 out.go:352] Setting JSON to false
	I0819 04:11:02.202446    3503 mustload.go:65] Loading cluster: multinode-837000
	I0819 04:11:02.202506    3503 notify.go:220] Checking for updates...
	I0819 04:11:02.202644    3503 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:11:02.202649    3503 status.go:255] checking status of multinode-837000 ...
	I0819 04:11:02.202856    3503 status.go:330] multinode-837000 host status = "Stopped" (err=<nil>)
	I0819 04:11:02.202860    3503 status.go:343] host is not running, skipping remaining checks
	I0819 04:11:02.202862    3503 status.go:257] multinode-837000 status: &{Name:multinode-837000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr": multinode-837000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-837000 status --alsologtostderr": multinode-837000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (29.4315ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.06s)
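The two assertions above (multinode_test.go:364 and :368) fail on a count, not on the stop itself: `minikube stop` succeeded, but the status output carries only one "host: Stopped"/"kubelet: Stopped" pair because the worker nodes were never created earlier in this serial run. A rough manual equivalent of that count (my reconstruction, not the test's actual code):

    # a fully stopped two-node cluster would print 2; this profile prints 1
    out/minikube-darwin-arm64 -p multinode-837000 status | grep -c 'host: Stopped'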

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-837000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-837000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.188072541s)

-- stdout --
	* [multinode-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-837000" primary control-plane node in "multinode-837000" cluster
	* Restarting existing qemu2 VM for "multinode-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:11:02.261049    3507 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:11:02.261187    3507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:11:02.261191    3507 out.go:358] Setting ErrFile to fd 2...
	I0819 04:11:02.261194    3507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:11:02.261324    3507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:11:02.262370    3507 out.go:352] Setting JSON to false
	I0819 04:11:02.278350    3507 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2425,"bootTime":1724063437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:11:02.278419    3507 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:11:02.283726    3507 out.go:177] * [multinode-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:11:02.290643    3507 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:11:02.290691    3507 notify.go:220] Checking for updates...
	I0819 04:11:02.298665    3507 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:11:02.301665    3507 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:11:02.304616    3507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:11:02.307736    3507 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:11:02.310637    3507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:11:02.313960    3507 config.go:182] Loaded profile config "multinode-837000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:11:02.314221    3507 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:11:02.318621    3507 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:11:02.325654    3507 start.go:297] selected driver: qemu2
	I0819 04:11:02.325664    3507 start.go:901] validating driver "qemu2" against &{Name:multinode-837000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:11:02.325734    3507 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:11:02.328012    3507 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:11:02.328038    3507 cni.go:84] Creating CNI manager for ""
	I0819 04:11:02.328043    3507 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 04:11:02.328090    3507 start.go:340] cluster config:
	{Name:multinode-837000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:11:02.331574    3507 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:02.339592    3507 out.go:177] * Starting "multinode-837000" primary control-plane node in "multinode-837000" cluster
	I0819 04:11:02.343652    3507 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:11:02.343669    3507 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:11:02.343680    3507 cache.go:56] Caching tarball of preloaded images
	I0819 04:11:02.343737    3507 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:11:02.343743    3507 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:11:02.343807    3507 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/multinode-837000/config.json ...
	I0819 04:11:02.344258    3507 start.go:360] acquireMachinesLock for multinode-837000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:11:02.344287    3507 start.go:364] duration metric: took 22.166µs to acquireMachinesLock for "multinode-837000"
	I0819 04:11:02.344299    3507 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:11:02.344305    3507 fix.go:54] fixHost starting: 
	I0819 04:11:02.344427    3507 fix.go:112] recreateIfNeeded on multinode-837000: state=Stopped err=<nil>
	W0819 04:11:02.344435    3507 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:11:02.352607    3507 out.go:177] * Restarting existing qemu2 VM for "multinode-837000" ...
	I0819 04:11:02.356645    3507 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:11:02.356682    3507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:14:2b:ce:27:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:11:02.358749    3507 main.go:141] libmachine: STDOUT: 
	I0819 04:11:02.358773    3507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:11:02.358804    3507 fix.go:56] duration metric: took 14.5005ms for fixHost
	I0819 04:11:02.358809    3507 start.go:83] releasing machines lock for "multinode-837000", held for 14.517833ms
	W0819 04:11:02.358816    3507 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:11:02.358853    3507 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:11:02.358858    3507 start.go:729] Will try again in 5 seconds ...
	I0819 04:11:07.360981    3507 start.go:360] acquireMachinesLock for multinode-837000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:11:07.361532    3507 start.go:364] duration metric: took 426.292µs to acquireMachinesLock for "multinode-837000"
	I0819 04:11:07.361696    3507 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:11:07.361716    3507 fix.go:54] fixHost starting: 
	I0819 04:11:07.362422    3507 fix.go:112] recreateIfNeeded on multinode-837000: state=Stopped err=<nil>
	W0819 04:11:07.362448    3507 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:11:07.372007    3507 out.go:177] * Restarting existing qemu2 VM for "multinode-837000" ...
	I0819 04:11:07.375120    3507 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:11:07.375416    3507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:14:2b:ce:27:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/multinode-837000/disk.qcow2
	I0819 04:11:07.385141    3507 main.go:141] libmachine: STDOUT: 
	I0819 04:11:07.385203    3507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:11:07.385300    3507 fix.go:56] duration metric: took 23.586916ms for fixHost
	I0819 04:11:07.385319    3507 start.go:83] releasing machines lock for "multinode-837000", held for 23.764625ms
	W0819 04:11:07.385532    3507 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-837000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-837000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:11:07.393155    3507 out.go:201] 
	W0819 04:11:07.397230    3507 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:11:07.397264    3507 out.go:270] * 
	* 
	W0819 04:11:07.399817    3507 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:11:07.408162    3507 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-837000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (66.857916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
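The log's own hint ("Running \"minikube delete -p multinode-837000\" may fix it") cannot help here: deleting the profile does not restart the host-side socket_vmnet daemon that the qemu2 driver depends on. Restarting the daemon is the relevant fix; both commands below are assumptions about the install method rather than anything shown in this log:

    # Homebrew-managed service install
    sudo brew services restart socket_vmnet
    # from-source launchd install (the label is an assumption; adjust to the local plist)
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet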

TestMultiNode/serial/ValidateNameConflict (20.21s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-837000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-837000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-837000-m01 --driver=qemu2 : exit status 80 (10.114758167s)

-- stdout --
	* [multinode-837000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-837000-m01" primary control-plane node in "multinode-837000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-837000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-837000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-837000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-837000-m02 --driver=qemu2 : exit status 80 (9.874603625s)

-- stdout --
	* [multinode-837000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-837000-m02" primary control-plane node in "multinode-837000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-837000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-837000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-837000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-837000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-837000: exit status 83 (79.105084ms)

-- stdout --
	* The control-plane node multinode-837000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-837000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-837000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-837000 -n multinode-837000: exit status 7 (30.2095ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.21s)
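Reading the assertions: the -m01 start's non-zero exit is not flagged (a profile name that collides with minikube's -mNN node-naming scheme for the existing multinode-837000 cluster is apparently the expected error), while the -m02 start is flagged at multinode_test.go:474, so it was expected to succeed and failed only because of the same socket_vmnet outage. Leftover throwaway profiles from runs like this can be inspected and removed by hand:

    # list profiles created by the test binary, then drop stragglers
    out/minikube-darwin-arm64 profile list
    out/minikube-darwin-arm64 delete -p multinode-837000-m01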

TestPreload (10.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-848000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-848000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.960925125s)

-- stdout --
	* [test-preload-848000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-848000" primary control-plane node in "test-preload-848000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-848000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:11:27.834186    3560 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:11:27.834317    3560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:11:27.834320    3560 out.go:358] Setting ErrFile to fd 2...
	I0819 04:11:27.834323    3560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:11:27.834436    3560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:11:27.835541    3560 out.go:352] Setting JSON to false
	I0819 04:11:27.851787    3560 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2450,"bootTime":1724063437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:11:27.851859    3560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:11:27.858820    3560 out.go:177] * [test-preload-848000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:11:27.866792    3560 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:11:27.866850    3560 notify.go:220] Checking for updates...
	I0819 04:11:27.882807    3560 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:11:27.885715    3560 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:11:27.888839    3560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:11:27.891799    3560 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:11:27.894751    3560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:11:27.898060    3560 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:11:27.898112    3560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:11:27.902822    3560 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:11:27.909806    3560 start.go:297] selected driver: qemu2
	I0819 04:11:27.909814    3560 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:11:27.909822    3560 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:11:27.912335    3560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:11:27.915759    3560 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:11:27.918828    3560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:11:27.918855    3560 cni.go:84] Creating CNI manager for ""
	I0819 04:11:27.918872    3560 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:11:27.918876    3560 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:11:27.918921    3560 start.go:340] cluster config:
	{Name:test-preload-848000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-848000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:11:27.922949    3560 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:27.929781    3560 out.go:177] * Starting "test-preload-848000" primary control-plane node in "test-preload-848000" cluster
	I0819 04:11:27.933740    3560 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0819 04:11:27.933826    3560 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/test-preload-848000/config.json ...
	I0819 04:11:27.933847    3560 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/test-preload-848000/config.json: {Name:mkec67172f2df7b4503030e4c58e95835aec65b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:11:27.933859    3560 cache.go:107] acquiring lock: {Name:mk3f3e925478163a3af4d89500c009678704e9a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:27.933859    3560 cache.go:107] acquiring lock: {Name:mk94eb0796edf83e48d15671021c5b007617e7ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:27.933885    3560 cache.go:107] acquiring lock: {Name:mk261aeb6a70ff693c068a7e239f8240b61e1643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:27.934062    3560 cache.go:107] acquiring lock: {Name:mkb76fd824e5e748957f83215e5ce11d909f9f60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:27.934115    3560 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 04:11:27.934122    3560 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 04:11:27.934113    3560 cache.go:107] acquiring lock: {Name:mk895501591e171b9579c225ecc957f675bedf85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:27.934137    3560 cache.go:107] acquiring lock: {Name:mkcd3219edc090bee3ae885b8ed2388d56080546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:27.934155    3560 cache.go:107] acquiring lock: {Name:mk228eb4819b56409e5056bdada21c25a2212de4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:27.934312    3560 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:11:27.934318    3560 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:11:27.934182    3560 cache.go:107] acquiring lock: {Name:mk6c8e1d02e9936f44ede034d7fd08504c57714e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:11:27.934374    3560 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 04:11:27.934389    3560 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:11:27.934469    3560 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 04:11:27.934466    3560 start.go:360] acquireMachinesLock for test-preload-848000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:11:27.934484    3560 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 04:11:27.934503    3560 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "test-preload-848000"
	I0819 04:11:27.934517    3560 start.go:93] Provisioning new machine with config: &{Name:test-preload-848000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-848000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:11:27.934564    3560 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:11:27.942739    3560 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:11:27.948844    3560 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 04:11:27.948878    3560 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 04:11:27.948964    3560 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 04:11:27.950938    3560 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 04:11:27.951010    3560 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 04:11:27.951020    3560 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:11:27.951048    3560 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:11:27.951020    3560 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:11:27.961757    3560 start.go:159] libmachine.API.Create for "test-preload-848000" (driver="qemu2")
	I0819 04:11:27.961785    3560 client.go:168] LocalClient.Create starting
	I0819 04:11:27.961854    3560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:11:27.961891    3560 main.go:141] libmachine: Decoding PEM data...
	I0819 04:11:27.961902    3560 main.go:141] libmachine: Parsing certificate...
	I0819 04:11:27.961948    3560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:11:27.961972    3560 main.go:141] libmachine: Decoding PEM data...
	I0819 04:11:27.961979    3560 main.go:141] libmachine: Parsing certificate...
	I0819 04:11:27.962332    3560 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:11:28.114183    3560 main.go:141] libmachine: Creating SSH key...
	I0819 04:11:28.337265    3560 main.go:141] libmachine: Creating Disk image...
	I0819 04:11:28.337288    3560 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:11:28.337474    3560 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2
	I0819 04:11:28.346857    3560 main.go:141] libmachine: STDOUT: 
	I0819 04:11:28.346876    3560 main.go:141] libmachine: STDERR: 
	I0819 04:11:28.346921    3560 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2 +20000M
	I0819 04:11:28.355618    3560 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:11:28.355639    3560 main.go:141] libmachine: STDERR: 
	I0819 04:11:28.355649    3560 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2
	I0819 04:11:28.355654    3560 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:11:28.355668    3560 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:11:28.355697    3560 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:4d:fb:f1:b5:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2
	I0819 04:11:28.357536    3560 main.go:141] libmachine: STDOUT: 
	I0819 04:11:28.357552    3560 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:11:28.357567    3560 client.go:171] duration metric: took 395.782709ms to LocalClient.Create
	I0819 04:11:28.409955    3560 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0819 04:11:28.411487    3560 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0819 04:11:28.416139    3560 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0819 04:11:28.422741    3560 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 04:11:28.457621    3560 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0819 04:11:28.475633    3560 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0819 04:11:28.520244    3560 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 04:11:28.520280    3560 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 04:11:28.578696    3560 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0819 04:11:28.578749    3560 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 644.651666ms
	I0819 04:11:28.578778    3560 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0819 04:11:28.884980    3560 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 04:11:28.885091    3560 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 04:11:29.115508    3560 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 04:11:29.115543    3560 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.181700166s
	I0819 04:11:29.115562    3560 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 04:11:30.357717    3560 start.go:128] duration metric: took 2.423158292s to createHost
	I0819 04:11:30.357778    3560 start.go:83] releasing machines lock for "test-preload-848000", held for 2.423298958s
	W0819 04:11:30.357868    3560 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:11:30.370977    3560 out.go:177] * Deleting "test-preload-848000" in qemu2 ...
	W0819 04:11:30.402475    3560 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:11:30.402510    3560 start.go:729] Will try again in 5 seconds ...
	I0819 04:11:30.608705    3560 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0819 04:11:30.608754    3560 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.674752666s
	I0819 04:11:30.608814    3560 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0819 04:11:30.880021    3560 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0819 04:11:30.880069    3560 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.945998583s
	I0819 04:11:30.880113    3560 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0819 04:11:32.931348    3560 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0819 04:11:32.931397    3560 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.997609834s
	I0819 04:11:32.931423    3560 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0819 04:11:33.400291    3560 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0819 04:11:33.400365    3560 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.4662825s
	I0819 04:11:33.400405    3560 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0819 04:11:34.421394    3560 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0819 04:11:34.421443    3560 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.487655542s
	I0819 04:11:34.421467    3560 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0819 04:11:35.402654    3560 start.go:360] acquireMachinesLock for test-preload-848000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:11:35.403109    3560 start.go:364] duration metric: took 369.042µs to acquireMachinesLock for "test-preload-848000"
	I0819 04:11:35.403244    3560 start.go:93] Provisioning new machine with config: &{Name:test-preload-848000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-848000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:11:35.403477    3560 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:11:35.413946    3560 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:11:35.464942    3560 start.go:159] libmachine.API.Create for "test-preload-848000" (driver="qemu2")
	I0819 04:11:35.465006    3560 client.go:168] LocalClient.Create starting
	I0819 04:11:35.465266    3560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:11:35.465339    3560 main.go:141] libmachine: Decoding PEM data...
	I0819 04:11:35.465359    3560 main.go:141] libmachine: Parsing certificate...
	I0819 04:11:35.465432    3560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:11:35.465476    3560 main.go:141] libmachine: Decoding PEM data...
	I0819 04:11:35.465489    3560 main.go:141] libmachine: Parsing certificate...
	I0819 04:11:35.466016    3560 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:11:35.626284    3560 main.go:141] libmachine: Creating SSH key...
	I0819 04:11:35.695155    3560 main.go:141] libmachine: Creating Disk image...
	I0819 04:11:35.695160    3560 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:11:35.695325    3560 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2
	I0819 04:11:35.704874    3560 main.go:141] libmachine: STDOUT: 
	I0819 04:11:35.704892    3560 main.go:141] libmachine: STDERR: 
	I0819 04:11:35.704937    3560 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2 +20000M
	I0819 04:11:35.712887    3560 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:11:35.712902    3560 main.go:141] libmachine: STDERR: 
	I0819 04:11:35.712919    3560 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2
	I0819 04:11:35.712923    3560 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:11:35.712935    3560 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:11:35.712968    3560 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ba:77:05:12:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/test-preload-848000/disk.qcow2
	I0819 04:11:35.714720    3560 main.go:141] libmachine: STDOUT: 
	I0819 04:11:35.714738    3560 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:11:35.714758    3560 client.go:171] duration metric: took 249.633334ms to LocalClient.Create
	I0819 04:11:37.715544    3560 start.go:128] duration metric: took 2.312050291s to createHost
	I0819 04:11:37.715598    3560 start.go:83] releasing machines lock for "test-preload-848000", held for 2.312495041s
	W0819 04:11:37.715892    3560 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-848000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-848000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:11:37.733411    3560 out.go:201] 
	W0819 04:11:37.737500    3560 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:11:37.737540    3560 out.go:270] * 
	* 
	W0819 04:11:37.739961    3560 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:11:37.749407    3560 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-848000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-19 04:11:37.770192 -0700 PDT m=+2205.990079334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-848000 -n test-preload-848000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-848000 -n test-preload-848000: exit status 7 (63.495041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-848000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-848000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-848000
--- FAIL: TestPreload (10.11s)
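Triage note: every qemu2 failure in this report reduces to the same host-side condition. minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so the client fails with "Connection refused". A minimal check on the build host (a sketch; it assumes socket_vmnet was installed via Homebrew, which this log does not confirm):

	# confirm the socket exists and a daemon holds it open
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet
	# exercise the client exactly as minikube does; `true` is a stand-in for qemu-system-aarch64
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# restart the daemon if it is down (Homebrew-managed install assumed)
	sudo brew services restart socket_vmnet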

TestScheduledStopUnix (10.06s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-548000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-548000 --memory=2048 --driver=qemu2 : exit status 80 (9.901756458s)

-- stdout --
	* [scheduled-stop-548000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-548000" primary control-plane node in "scheduled-stop-548000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-548000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-548000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-548000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-548000" primary control-plane node in "scheduled-stop-548000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-548000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-548000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-19 04:11:47.818886 -0700 PDT m=+2216.038912709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-548000 -n scheduled-stop-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-548000 -n scheduled-stop-548000: exit status 7 (67.419084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-548000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-548000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-548000
--- FAIL: TestScheduledStopUnix (10.06s)
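Triage note: the failure is environmental rather than specific to the scheduled-stop logic; it should reproduce outside the Go harness with the same binary and flags the test used (command taken from the log above, with --alsologtostderr -v=1 added to get the full driver trace):

	out/minikube-darwin-arm64 start -p scheduled-stop-548000 --memory=2048 --driver=qemu2 --alsologtostderr -v=1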

TestSkaffold (12.58s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1764293148 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1764293148 version: (1.052425375s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-202000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-202000 --memory=2600 --driver=qemu2 : exit status 80 (9.860046916s)

-- stdout --
	* [skaffold-202000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-202000" primary control-plane node in "skaffold-202000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-202000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-202000" primary control-plane node in "skaffold-202000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-19 04:12:00.410534 -0700 PDT m=+2228.630734876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-202000 -n skaffold-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-202000 -n skaffold-202000: exit status 7 (64.052209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-202000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-202000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-202000
--- FAIL: TestSkaffold (12.58s)
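Triage note: besides the socket_vmnet daemon, the qemu2 driver needs the QEMU binary and the EDK2 firmware referenced in the launch command logged earlier; both can be verified directly (a sketch; the paths are the Homebrew locations that appear in this log):

	qemu-system-aarch64 --version
	ls -l /opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd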

TestRunningBinaryUpgrade (607.28s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1380321318 start -p running-upgrade-079000 --memory=2200 --vm-driver=qemu2 
E0819 04:13:32.787986    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1380321318 start -p running-upgrade-079000 --memory=2200 --vm-driver=qemu2 : (1m7.918308292s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-079000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0819 04:15:18.683122    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-079000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.838008083s)

-- stdout --
	* [running-upgrade-079000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-079000" primary control-plane node in "running-upgrade-079000" cluster
	* Updating the running qemu2 "running-upgrade-079000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0819 04:13:50.088489    3949 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:13:50.088636    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:13:50.088644    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:13:50.088646    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:13:50.088795    3949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:13:50.090128    3949 out.go:352] Setting JSON to false
	I0819 04:13:50.106583    3949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2593,"bootTime":1724063437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:13:50.106659    3949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:13:50.111766    3949 out.go:177] * [running-upgrade-079000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:13:50.118604    3949 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:13:50.118645    3949 notify.go:220] Checking for updates...
	I0819 04:13:50.126721    3949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:13:50.130712    3949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:13:50.133808    3949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:13:50.136778    3949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:13:50.145778    3949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:13:50.150031    3949 config.go:182] Loaded profile config "running-upgrade-079000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:13:50.154773    3949 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 04:13:50.158797    3949 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:13:50.162722    3949 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:13:50.169774    3949 start.go:297] selected driver: qemu2
	I0819 04:13:50.169779    3949 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-079000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50264 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-079000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:13:50.169837    3949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:13:50.172341    3949 cni.go:84] Creating CNI manager for ""
	I0819 04:13:50.172359    3949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:13:50.172388    3949 start.go:340] cluster config:
	{Name:running-upgrade-079000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50264 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-079000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:13:50.172443    3949 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:13:50.179775    3949 out.go:177] * Starting "running-upgrade-079000" primary control-plane node in "running-upgrade-079000" cluster
	I0819 04:13:50.183642    3949 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:13:50.183657    3949 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 04:13:50.183665    3949 cache.go:56] Caching tarball of preloaded images
	I0819 04:13:50.183718    3949 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:13:50.183724    3949 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 04:13:50.183774    3949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/config.json ...
	I0819 04:13:50.184104    3949 start.go:360] acquireMachinesLock for running-upgrade-079000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:13:50.184136    3949 start.go:364] duration metric: took 26.667µs to acquireMachinesLock for "running-upgrade-079000"
	I0819 04:13:50.184145    3949 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:13:50.184151    3949 fix.go:54] fixHost starting: 
	I0819 04:13:50.184720    3949 fix.go:112] recreateIfNeeded on running-upgrade-079000: state=Running err=<nil>
	W0819 04:13:50.184728    3949 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:13:50.187804    3949 out.go:177] * Updating the running qemu2 "running-upgrade-079000" VM ...
	I0819 04:13:50.194758    3949 machine.go:93] provisionDockerMachine start ...
	I0819 04:13:50.194816    3949 main.go:141] libmachine: Using SSH client type: native
	I0819 04:13:50.194962    3949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd85a0] 0x104ddae00 <nil>  [] 0s} localhost 50232 <nil> <nil>}
	I0819 04:13:50.194968    3949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 04:13:50.241655    3949 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-079000
	
	I0819 04:13:50.241670    3949 buildroot.go:166] provisioning hostname "running-upgrade-079000"
	I0819 04:13:50.241714    3949 main.go:141] libmachine: Using SSH client type: native
	I0819 04:13:50.241826    3949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd85a0] 0x104ddae00 <nil>  [] 0s} localhost 50232 <nil> <nil>}
	I0819 04:13:50.241831    3949 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-079000 && echo "running-upgrade-079000" | sudo tee /etc/hostname
	I0819 04:13:50.295777    3949 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-079000
	
	I0819 04:13:50.295831    3949 main.go:141] libmachine: Using SSH client type: native
	I0819 04:13:50.295952    3949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd85a0] 0x104ddae00 <nil>  [] 0s} localhost 50232 <nil> <nil>}
	I0819 04:13:50.295960    3949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-079000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-079000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-079000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 04:13:50.342718    3949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 04:13:50.342731    3949 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19476-967/.minikube CaCertPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19476-967/.minikube}
	I0819 04:13:50.342746    3949 buildroot.go:174] setting up certificates
	I0819 04:13:50.342755    3949 provision.go:84] configureAuth start
	I0819 04:13:50.342763    3949 provision.go:143] copyHostCerts
	I0819 04:13:50.342838    3949 exec_runner.go:144] found /Users/jenkins/minikube-integration/19476-967/.minikube/ca.pem, removing ...
	I0819 04:13:50.342844    3949 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19476-967/.minikube/ca.pem
	I0819 04:13:50.342990    3949 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19476-967/.minikube/ca.pem (1078 bytes)
	I0819 04:13:50.343150    3949 exec_runner.go:144] found /Users/jenkins/minikube-integration/19476-967/.minikube/cert.pem, removing ...
	I0819 04:13:50.343153    3949 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19476-967/.minikube/cert.pem
	I0819 04:13:50.343208    3949 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19476-967/.minikube/cert.pem (1123 bytes)
	I0819 04:13:50.343319    3949 exec_runner.go:144] found /Users/jenkins/minikube-integration/19476-967/.minikube/key.pem, removing ...
	I0819 04:13:50.343322    3949 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19476-967/.minikube/key.pem
	I0819 04:13:50.343370    3949 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19476-967/.minikube/key.pem (1675 bytes)
	I0819 04:13:50.343468    3949 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19476-967/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-079000 san=[127.0.0.1 localhost minikube running-upgrade-079000]
	I0819 04:13:50.533556    3949 provision.go:177] copyRemoteCerts
	I0819 04:13:50.533603    3949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 04:13:50.533612    3949 sshutil.go:53] new ssh client: &{IP:localhost Port:50232 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/id_rsa Username:docker}
	I0819 04:13:50.561798    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 04:13:50.568433    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 04:13:50.575094    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 04:13:50.582110    3949 provision.go:87] duration metric: took 239.33375ms to configureAuth
	I0819 04:13:50.582119    3949 buildroot.go:189] setting minikube options for container-runtime
	I0819 04:13:50.582205    3949 config.go:182] Loaded profile config "running-upgrade-079000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:13:50.582242    3949 main.go:141] libmachine: Using SSH client type: native
	I0819 04:13:50.582331    3949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd85a0] 0x104ddae00 <nil>  [] 0s} localhost 50232 <nil> <nil>}
	I0819 04:13:50.582335    3949 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 04:13:50.631425    3949 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 04:13:50.631435    3949 buildroot.go:70] root file system type: tmpfs
	I0819 04:13:50.631481    3949 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 04:13:50.631528    3949 main.go:141] libmachine: Using SSH client type: native
	I0819 04:13:50.631638    3949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd85a0] 0x104ddae00 <nil>  [] 0s} localhost 50232 <nil> <nil>}
	I0819 04:13:50.631670    3949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 04:13:50.681515    3949 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 04:13:50.681571    3949 main.go:141] libmachine: Using SSH client type: native
	I0819 04:13:50.681686    3949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd85a0] 0x104ddae00 <nil>  [] 0s} localhost 50232 <nil> <nil>}
	I0819 04:13:50.681704    3949 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 04:13:50.728803    3949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 04:13:50.728814    3949 machine.go:96] duration metric: took 534.017541ms to provisionDockerMachine
	I0819 04:13:50.728820    3949 start.go:293] postStartSetup for "running-upgrade-079000" (driver="qemu2")
	I0819 04:13:50.728826    3949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 04:13:50.728875    3949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 04:13:50.728884    3949 sshutil.go:53] new ssh client: &{IP:localhost Port:50232 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/id_rsa Username:docker}
	I0819 04:13:50.754362    3949 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 04:13:50.755622    3949 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 04:13:50.755633    3949 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19476-967/.minikube/addons for local assets ...
	I0819 04:13:50.755707    3949 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19476-967/.minikube/files for local assets ...
	I0819 04:13:50.755842    3949 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem -> 14342.pem in /etc/ssl/certs
	I0819 04:13:50.755968    3949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 04:13:50.758826    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem --> /etc/ssl/certs/14342.pem (1708 bytes)
	I0819 04:13:50.766027    3949 start.go:296] duration metric: took 37.200542ms for postStartSetup
	I0819 04:13:50.766041    3949 fix.go:56] duration metric: took 581.858041ms for fixHost
	I0819 04:13:50.766077    3949 main.go:141] libmachine: Using SSH client type: native
	I0819 04:13:50.766187    3949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd85a0] 0x104ddae00 <nil>  [] 0s} localhost 50232 <nil> <nil>}
	I0819 04:13:50.766194    3949 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 04:13:50.813174    3949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724066030.531248472
	
	I0819 04:13:50.813181    3949 fix.go:216] guest clock: 1724066030.531248472
	I0819 04:13:50.813185    3949 fix.go:229] Guest: 2024-08-19 04:13:50.531248472 -0700 PDT Remote: 2024-08-19 04:13:50.766044 -0700 PDT m=+0.697612293 (delta=-234.795528ms)
	I0819 04:13:50.813200    3949 fix.go:200] guest clock delta is within tolerance: -234.795528ms
	I0819 04:13:50.813204    3949 start.go:83] releasing machines lock for "running-upgrade-079000", held for 629.025209ms
	I0819 04:13:50.813260    3949 ssh_runner.go:195] Run: cat /version.json
	I0819 04:13:50.813270    3949 sshutil.go:53] new ssh client: &{IP:localhost Port:50232 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/id_rsa Username:docker}
	I0819 04:13:50.813260    3949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 04:13:50.813309    3949 sshutil.go:53] new ssh client: &{IP:localhost Port:50232 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/id_rsa Username:docker}
	W0819 04:13:50.813826    3949 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50340->127.0.0.1:50232: write: broken pipe
	I0819 04:13:50.813842    3949 retry.go:31] will retry after 243.436377ms: ssh: handshake failed: write tcp 127.0.0.1:50340->127.0.0.1:50232: write: broken pipe
	W0819 04:13:50.837451    3949 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 04:13:50.837493    3949 ssh_runner.go:195] Run: systemctl --version
	I0819 04:13:50.839348    3949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 04:13:50.841203    3949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 04:13:50.841230    3949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 04:13:50.843921    3949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 04:13:50.848521    3949 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 04:13:50.848533    3949 start.go:495] detecting cgroup driver to use...
	I0819 04:13:50.848606    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:13:50.853509    3949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 04:13:50.856551    3949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 04:13:50.859407    3949 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 04:13:50.859428    3949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 04:13:50.862600    3949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:13:50.865967    3949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 04:13:50.869336    3949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:13:50.872555    3949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 04:13:50.875400    3949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 04:13:50.878297    3949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 04:13:50.881634    3949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 04:13:50.885130    3949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 04:13:50.888040    3949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 04:13:50.890804    3949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:13:50.978052    3949 ssh_runner.go:195] Run: sudo systemctl restart containerd
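Each sed above edits /etc/containerd/config.toml in place; the key one flips SystemdCgroup so the runc shim uses the cgroupfs driver. A sketch of that rewrite as a Go regexp (the sample TOML fragment is illustrative, not copied from the guest):

package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs rewrites any `SystemdCgroup = ...` line, preserving
// its indentation, just as the sed expression above does.
func forceCgroupfs(configTOML string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.options]\n  SystemdCgroup = true\n" // illustrative only
	fmt.Print(forceCgroupfs(in))
}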
	I0819 04:13:50.984934    3949 start.go:495] detecting cgroup driver to use...
	I0819 04:13:50.985012    3949 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 04:13:50.993421    3949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:13:50.997762    3949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 04:13:51.005147    3949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:13:51.010305    3949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:13:51.014726    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:13:51.020282    3949 ssh_runner.go:195] Run: which cri-dockerd
	I0819 04:13:51.021644    3949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 04:13:51.024174    3949 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 04:13:51.029310    3949 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 04:13:51.122312    3949 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 04:13:51.216026    3949 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 04:13:51.216083    3949 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 04:13:51.221752    3949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:13:51.311843    3949 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:13:53.989489    3949 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6774965s)
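The 130-byte /etc/docker/daemon.json written above is not echoed into the log, so its exact content is unknown; the sketch below reconstructs a plausible payload from Docker's documented options and the "cgroupfs" driver just selected. Treat every field here as an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical reconstruction of /etc/docker/daemon.json; the
	// exact payload is not present in the log.
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}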
	I0819 04:13:53.989558    3949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 04:13:53.994521    3949 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 04:13:54.001014    3949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:13:54.006022    3949 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 04:13:54.084149    3949 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 04:13:54.165166    3949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:13:54.247156    3949 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 04:13:54.253599    3949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:13:54.258536    3949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:13:54.347378    3949 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 04:13:54.391237    3949 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 04:13:54.391314    3949 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
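The 60-second waits above are simple existence polls. A sketch of the pattern; in minikube the stat runs through the SSH runner, while os.Stat stands in for it here:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}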
	I0819 04:13:54.393554    3949 start.go:563] Will wait 60s for crictl version
	I0819 04:13:54.393611    3949 ssh_runner.go:195] Run: which crictl
	I0819 04:13:54.395408    3949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 04:13:54.407697    3949 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 04:13:54.407765    3949 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:13:54.420197    3949 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:13:54.440452    3949 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 04:13:54.440577    3949 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 04:13:54.442015    3949 kubeadm.go:883] updating cluster {Name:running-upgrade-079000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50264 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-079000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 04:13:54.442059    3949 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:13:54.442099    3949 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:13:54.453232    3949 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:13:54.453239    3949 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:13:54.453283    3949 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:13:54.456333    3949 ssh_runner.go:195] Run: which lz4
	I0819 04:13:54.457707    3949 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 04:13:54.460261    3949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 04:13:54.460272    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 04:13:55.364401    3949 docker.go:649] duration metric: took 906.687833ms to copy over tarball
	I0819 04:13:55.364472    3949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 04:13:56.480501    3949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.115961416s)
	I0819 04:13:56.480515    3949 ssh_runner.go:146] rm: /preloaded.tar.lz4
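The preload sequence above is: stat the tarball on the guest (exit status 1 means it must be copied), scp it over, unpack it across /var with lz4, then delete it. A condensed sketch over SSH; runGuest is a hypothetical stand-in for minikube's ssh_runner, with the port and user taken from the sshutil lines earlier in this log:

package main

import (
	"fmt"
	"os/exec"
)

// runGuest runs one command on the guest over SSH.
func runGuest(cmd string) error {
	out, err := exec.Command("ssh", "-p", "50232", "docker@localhost", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	// Exit status 1 from stat is the signal to copy the cached tarball over.
	if err := runGuest(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		fmt.Println("tarball absent; this is where the scp from the host cache happens")
	}
	for _, cmd := range []string{
		"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4",
		"sudo rm /preloaded.tar.lz4",
	} {
		if err := runGuest(cmd); err != nil {
			fmt.Println(err)
		}
	}
}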
	I0819 04:13:56.496400    3949 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:13:56.499848    3949 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 04:13:56.504946    3949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:13:56.589033    3949 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:13:57.772948    3949 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.183858542s)
	I0819 04:13:57.773049    3949 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:13:57.783647    3949 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:13:57.783658    3949 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:13:57.783664    3949 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 04:13:57.787529    3949 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:13:57.789184    3949 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:13:57.791372    3949 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:13:57.791378    3949 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:13:57.792914    3949 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:13:57.793025    3949 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:13:57.794344    3949 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:13:57.794388    3949 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:13:57.795696    3949 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:13:57.795892    3949 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:13:57.797163    3949 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:13:57.797224    3949 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:13:57.798206    3949 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 04:13:57.798258    3949 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:13:57.799685    3949 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:13:57.799752    3949 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
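The interleaved "retrieving image" and "daemon lookup ... No such image" lines show each image being resolved on its own goroutine, with the failed local-daemon lookup falling over to a remote pull. A minimal fan-out sketch with a stub resolver:

package main

import (
	"fmt"
	"sync"
)

// resolveAll resolves every image concurrently, collecting one error
// slot per image, mirroring the interleaved log lines above.
func resolveAll(images []string, resolve func(string) error) []error {
	errs := make([]error, len(images))
	var wg sync.WaitGroup
	for i, img := range images {
		wg.Add(1)
		go func(i int, img string) {
			defer wg.Done()
			errs[i] = resolve(img) // a failed daemon lookup falls back to a remote pull
		}(i, img)
	}
	wg.Wait()
	return errs
}

func main() {
	images := []string{"registry.k8s.io/pause:3.7", "registry.k8s.io/etcd:3.5.3-0"}
	for i, err := range resolveAll(images, func(string) error { return nil }) {
		fmt.Println(images[i], err)
	}
}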
	I0819 04:13:58.185064    3949 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:13:58.198188    3949 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 04:13:58.198220    3949 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:13:58.198277    3949 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:13:58.209740    3949 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 04:13:58.220641    3949 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:13:58.227788    3949 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:13:58.231595    3949 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 04:13:58.231613    3949 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:13:58.231659    3949 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:13:58.242395    3949 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:13:58.244891    3949 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 04:13:58.244910    3949 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:13:58.244956    3949 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:13:58.248576    3949 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 04:13:58.257693    3949 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 04:13:58.257715    3949 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:13:58.257775    3949 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:13:58.259770    3949 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 04:13:58.263163    3949 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 04:13:58.269324    3949 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0819 04:13:58.275916    3949 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 04:13:58.276036    3949 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:13:58.277820    3949 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 04:13:58.277838    3949 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:13:58.277872    3949 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 04:13:58.283713    3949 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 04:13:58.289021    3949 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 04:13:58.289044    3949 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:13:58.289101    3949 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:13:58.290849    3949 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 04:13:58.302994    3949 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 04:13:58.303018    3949 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 04:13:58.303075    3949 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
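Each "needs transfer" decision above compares the runtime's image ID (docker image inspect --format {{.Id}}) against the hash the cache expects, then removes the stale tag so the cached copy can be loaded. A sketch; treating the inspect output as directly comparable to the logged hash is an assumption (the real code normalises the sha256: prefix):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage reports whether the tag must be reloaded from the cache,
// removing a mismatched tag first, as the rmi calls above do.
func ensureImage(tag, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return false // correct image already present
	}
	exec.Command("docker", "rmi", tag).Run() // drop the stale tag; errors ignored
	return true
}

func main() {
	fmt.Println(ensureImage("registry.k8s.io/pause:3.7",
		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"))
}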
	I0819 04:13:58.314950    3949 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 04:13:58.315074    3949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:13:58.318840    3949 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 04:13:58.318926    3949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 04:13:58.320168    3949 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 04:13:58.320180    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0819 04:13:58.320361    3949 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 04:13:58.320369    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 04:13:58.349415    3949 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 04:13:58.349430    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0819 04:13:58.366211    3949 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 04:13:58.366322    3949 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:13:58.395939    3949 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0819 04:13:58.395962    3949 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:13:58.395968    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 04:13:58.395983    3949 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 04:13:58.396001    3949 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:13:58.396060    3949 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:13:58.416532    3949 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 04:13:58.416656    3949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:13:58.443588    3949 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 04:13:58.443639    3949 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 04:13:58.443666    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 04:13:58.475860    3949 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:13:58.475873    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 04:13:58.705160    3949 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
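The load step above is literally a shell pipe, sudo cat <tar> | docker load. The same invocation via os/exec:

package main

import (
	"fmt"
	"os/exec"
)

// dockerLoad replays the `sudo cat <tar> | docker load` pipe used above.
func dockerLoad(path string) error {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", path)).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load %s: %v\n%s", path, err, out)
	}
	return nil
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
		fmt.Println(err)
	}
}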
	I0819 04:13:58.705205    3949 cache_images.go:92] duration metric: took 921.505875ms to LoadCachedImages
	W0819 04:13:58.705248    3949 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0819 04:13:58.705254    3949 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 04:13:58.705314    3949 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-079000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-079000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 04:13:58.705374    3949 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 04:13:58.718937    3949 cni.go:84] Creating CNI manager for ""
	I0819 04:13:58.718948    3949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:13:58.718952    3949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 04:13:58.718961    3949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-079000 NodeName:running-upgrade-079000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 04:13:58.719030    3949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-079000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 04:13:58.719087    3949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 04:13:58.722490    3949 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 04:13:58.722522    3949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 04:13:58.725852    3949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 04:13:58.730974    3949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 04:13:58.736422    3949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0819 04:13:58.741254    3949 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 04:13:58.742682    3949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:13:58.829434    3949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:13:58.834944    3949 certs.go:68] Setting up /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000 for IP: 10.0.2.15
	I0819 04:13:58.834956    3949 certs.go:194] generating shared ca certs ...
	I0819 04:13:58.834964    3949 certs.go:226] acquiring lock for ca certs: {Name:mk0a363c308d59dcc2ce68f87ac07833cd4c8b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:13:58.835115    3949 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19476-967/.minikube/ca.key
	I0819 04:13:58.835170    3949 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19476-967/.minikube/proxy-client-ca.key
	I0819 04:13:58.835175    3949 certs.go:256] generating profile certs ...
	I0819 04:13:58.835234    3949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/client.key
	I0819 04:13:58.835250    3949 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.key.fdcdaea8
	I0819 04:13:58.835262    3949 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.crt.fdcdaea8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 04:13:58.904654    3949 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.crt.fdcdaea8 ...
	I0819 04:13:58.904659    3949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.crt.fdcdaea8: {Name:mkcde8c518a5f2d5bcce6281b98a499856e7274f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:13:58.905142    3949 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.key.fdcdaea8 ...
	I0819 04:13:58.905150    3949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.key.fdcdaea8: {Name:mk277d200d178595ab332529e8221267d56eec0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:13:58.905282    3949 certs.go:381] copying /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.crt.fdcdaea8 -> /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.crt
	I0819 04:13:58.905450    3949 certs.go:385] copying /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.key.fdcdaea8 -> /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.key
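The apiserver certificate generated above carries IP SANs for the service VIP, loopback, and the node address, and the profile is configured with a 26280h lifetime. A sketch of issuing such a serving cert with crypto/x509; only the SAN list and the lifetime come from the log, the remaining field choices are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a serving certificate with the given IP SANs,
// signed by an existing CA key pair (standing in for minikubeCA).
func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA for the demo.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15")} // SANs from the log above
	der, err := signServingCert(ca, caKey, ips)
	fmt.Println(len(der), err)
}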
	I0819 04:13:58.905601    3949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/proxy-client.key
	I0819 04:13:58.905732    3949 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/1434.pem (1338 bytes)
	W0819 04:13:58.905760    3949 certs.go:480] ignoring /Users/jenkins/minikube-integration/19476-967/.minikube/certs/1434_empty.pem, impossibly tiny 0 bytes
	I0819 04:13:58.905764    3949 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 04:13:58.905783    3949 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem (1078 bytes)
	I0819 04:13:58.905801    3949 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem (1123 bytes)
	I0819 04:13:58.905820    3949 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/key.pem (1675 bytes)
	I0819 04:13:58.905859    3949 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem (1708 bytes)
	I0819 04:13:58.906174    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 04:13:58.913091    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0819 04:13:58.920550    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 04:13:58.927930    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 04:13:58.935535    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 04:13:58.942536    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 04:13:58.949091    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 04:13:58.955954    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 04:13:58.963303    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem --> /usr/share/ca-certificates/14342.pem (1708 bytes)
	I0819 04:13:58.970745    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 04:13:58.977700    3949 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/certs/1434.pem --> /usr/share/ca-certificates/1434.pem (1338 bytes)
	I0819 04:13:58.984331    3949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 04:13:58.989293    3949 ssh_runner.go:195] Run: openssl version
	I0819 04:13:58.990953    3949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14342.pem && ln -fs /usr/share/ca-certificates/14342.pem /etc/ssl/certs/14342.pem"
	I0819 04:13:58.994260    3949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14342.pem
	I0819 04:13:58.995712    3949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 10:42 /usr/share/ca-certificates/14342.pem
	I0819 04:13:58.995734    3949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14342.pem
	I0819 04:13:58.997569    3949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14342.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 04:13:59.000108    3949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 04:13:59.003115    3949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:13:59.004660    3949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:35 /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:13:59.004682    3949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:13:59.006498    3949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 04:13:59.009196    3949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434.pem && ln -fs /usr/share/ca-certificates/1434.pem /etc/ssl/certs/1434.pem"
	I0819 04:13:59.011972    3949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434.pem
	I0819 04:13:59.013482    3949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 10:42 /usr/share/ca-certificates/1434.pem
	I0819 04:13:59.013507    3949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434.pem
	I0819 04:13:59.015318    3949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1434.pem /etc/ssl/certs/51391683.0"
	I0819 04:13:59.018316    3949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 04:13:59.019761    3949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 04:13:59.021515    3949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 04:13:59.023386    3949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 04:13:59.025129    3949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 04:13:59.027068    3949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 04:13:59.028860    3949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
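openssl x509 -checkend 86400 exits non-zero when the certificate expires within 86400 seconds (24 hours), which is how the six checks above detect imminent expiry. The equivalent test in Go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: true means the cert
// expires inside the window.
func expiresWithin(cert *x509.Certificate, window time.Duration) bool {
	return time.Now().Add(window).After(cert.NotAfter)
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("not a PEM certificate")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiresWithin(cert, 24*time.Hour))
}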
	I0819 04:13:59.030678    3949 kubeadm.go:392] StartCluster: {Name:running-upgrade-079000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50264 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-079000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:13:59.030740    3949 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:13:59.041047    3949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 04:13:59.044081    3949 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 04:13:59.044086    3949 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 04:13:59.044107    3949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 04:13:59.047101    3949 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:13:59.047339    3949 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-079000" does not appear in /Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:13:59.047389    3949 kubeconfig.go:62] /Users/jenkins/minikube-integration/19476-967/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-079000" cluster setting kubeconfig missing "running-upgrade-079000" context setting]
	I0819 04:13:59.047526    3949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/kubeconfig: {Name:mkcc8b27cbda2ef567c4911aa335c1e1951a7d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:13:59.048187    3949 kapi.go:59] client config for running-upgrade-079000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/client.key", CAFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106391610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:13:59.048504    3949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 04:13:59.051528    3949 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-079000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
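The drift check above keys off diff's exit status: 0 means the on-disk kubeadm.yaml already matches, 1 means it differs (stdout carries the unified diff just shown), 2 means diff itself failed. A sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeadmConfigDrifted maps diff's exit status onto a decision:
// 0 = identical, 1 = drifted (out holds the unified diff), 2 = error.
func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := kubeadmConfigDrifted(
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("drifted:", drifted, "err:", err)
	fmt.Print(diff)
}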
	I0819 04:13:59.051534    3949 kubeadm.go:1160] stopping kube-system containers ...
	I0819 04:13:59.051574    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:13:59.062952    3949 docker.go:483] Stopping containers: [a9af250ac798 99601354eb09 022ea700d2ef 2a5402fbebac cea274700c6b fcadb869ae9b a04863c7df9e a62be228f298 b19a94fd47ab 76962930ca4b 75c3286a1782 24ac12b6ab91 82e016e3639d c5787c354274]
	I0819 04:13:59.063015    3949 ssh_runner.go:195] Run: docker stop a9af250ac798 99601354eb09 022ea700d2ef 2a5402fbebac cea274700c6b fcadb869ae9b a04863c7df9e a62be228f298 b19a94fd47ab 76962930ca4b 75c3286a1782 24ac12b6ab91 82e016e3639d c5787c354274
	I0819 04:13:59.074614    3949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 04:13:59.167208    3949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:13:59.171551    3949 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 19 11:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug 19 11:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 19 11:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 19 11:13 /etc/kubernetes/scheduler.conf
	
	I0819 04:13:59.171581    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/admin.conf
	I0819 04:13:59.174794    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:13:59.174820    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:13:59.178127    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/kubelet.conf
	I0819 04:13:59.181196    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:13:59.181222    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:13:59.184676    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/controller-manager.conf
	I0819 04:13:59.187842    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:13:59.187869    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:13:59.190817    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/scheduler.conf
	I0819 04:13:59.193357    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:13:59.193380    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
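The loop above keeps each kubeconfig-style file only if it already references the new control-plane endpoint; a grep exiting 1 (as in every case here, since the port changed) triggers removal so kubeadm regenerates the file. A sketch:

package main

import (
	"fmt"
	"os/exec"
)

// pruneStaleConfigs removes any file that does not mention the new
// control-plane endpoint, mirroring the grep-then-rm pairs above.
func pruneStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			exec.Command("sudo", "rm", "-f", f).Run()
			fmt.Println("removed", f)
		}
	}
}

func main() {
	pruneStaleConfigs("https://control-plane.minikube.internal:50264", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}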
	I0819 04:13:59.196355    3949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:13:59.199627    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:13:59.228920    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:13:59.772572    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:13:59.961566    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:13:59.989398    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:14:00.015043    3949 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:14:00.015118    3949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:14:00.517225    3949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:14:01.017204    3949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:14:01.021631    3949 api_server.go:72] duration metric: took 1.006564208s to wait for apiserver process to appear ...
	I0819 04:14:01.021642    3949 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:14:01.021657    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:06.023835    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:06.023865    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:11.024297    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:11.024399    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:16.025376    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:16.025463    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:21.026636    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:21.026709    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:26.028095    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:26.028186    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:31.029975    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:31.030103    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:36.032357    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:36.032475    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:41.033993    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:41.034066    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:46.035838    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:46.035927    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:51.038100    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:51.038182    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:14:56.040914    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:14:56.040987    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:15:01.043641    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
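The healthz wait above probes https://10.0.2.15:8443/healthz with a short per-request timeout, looping until an overall deadline; once it keeps failing, minikube falls back to gathering logs, as the lines below show. A sketch of the loop; skipping TLS verification is an assumption made for brevity, the real client trusts the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver health endpoint until it answers
// 200 OK or the overall deadline passes.
func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s gaps in the log above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never reported healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}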
	I0819 04:15:01.044088    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:15:01.084152    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:15:01.084289    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:15:01.108159    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:15:01.108298    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:15:01.123564    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:15:01.123647    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:15:01.135721    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:15:01.135794    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:15:01.146444    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:15:01.146512    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:15:01.157505    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:15:01.157587    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:15:01.167926    3949 logs.go:276] 0 containers: []
	W0819 04:15:01.167938    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:15:01.168000    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:15:01.178274    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:15:01.178289    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:15:01.178294    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:15:01.204089    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:15:01.204095    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:15:01.222928    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:15:01.222942    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:15:01.234585    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:15:01.234595    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:15:01.248287    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:15:01.248301    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:15:01.260159    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:15:01.260170    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:15:01.275511    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:15:01.275521    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:15:01.291946    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:15:01.291955    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:15:01.302760    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:15:01.302774    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:15:01.307430    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:15:01.307436    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:15:01.327766    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:15:01.327776    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:15:01.347345    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:15:01.347358    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:15:01.364013    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:15:01.364025    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:15:01.376050    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:15:01.376060    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:15:01.386972    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:15:01.386982    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:15:01.398518    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:15:01.398531    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:15:01.435276    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:15:01.435287    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
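
The block above is one complete diagnostics pass: minikube enumerates container IDs for each control-plane component via docker ps name filters (with `docker ps -a`, a component that has been restarted shows both its exited and current containers, hence the "2 containers" lines), tails the last 400 lines of each, and adds journalctl, dmesg, and kubectl describe nodes output. A condensed Go sketch of that enumerate-and-tail loop follows; it is illustrative only, not minikube's logs.go, and it runs docker locally where the real flow goes through ssh_runner into the VM.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, running or exited, whose name
    // carries the k8s_<component> prefix used by the kubelet's dockershim
    // container-naming scheme.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("listing %s containers failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                // Mirror the log's "docker logs --tail 400 <id>" calls;
                // CombinedOutput captures the container's stderr as well.
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, out)
            }
        }
    }

The same pass repeats after every failed healthz probe below, against the same container IDs, which is why the remainder of this log consists of near-identical cycles.
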
	I0819 04:15:04.011510    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:15:09.014945    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:15:09.015322    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:15:09.049892    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:15:09.050027    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:15:09.074205    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:15:09.074300    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:15:09.088173    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:15:09.088243    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:15:09.103791    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:15:09.103856    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:15:09.114347    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:15:09.114413    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:15:09.125376    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:15:09.125446    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:15:09.135567    3949 logs.go:276] 0 containers: []
	W0819 04:15:09.135577    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:15:09.135630    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:15:09.146434    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:15:09.146451    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:15:09.146457    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:15:09.160617    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:15:09.160631    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:15:09.172540    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:15:09.172551    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:15:09.184551    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:15:09.184563    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:15:09.196541    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:15:09.196553    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:15:09.212619    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:15:09.212632    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:15:09.232268    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:15:09.232279    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:15:09.247183    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:15:09.247192    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:15:09.258496    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:15:09.258510    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:15:09.284479    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:15:09.284488    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:15:09.322139    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:15:09.322150    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:15:09.326430    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:15:09.326437    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:15:09.360828    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:15:09.360842    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:15:09.381033    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:15:09.381050    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:15:09.397723    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:15:09.397733    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:15:09.412786    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:15:09.412797    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:15:09.428058    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:15:09.428067    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:15:11.943596    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:15:16.946175    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:15:16.946595    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:15:16.987211    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:15:16.987372    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:15:17.009931    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:15:17.010042    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:15:17.025545    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:15:17.025625    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:15:17.040725    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:15:17.040805    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:15:17.051085    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:15:17.051158    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:15:17.062193    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:15:17.062282    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:15:17.072283    3949 logs.go:276] 0 containers: []
	W0819 04:15:17.072297    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:15:17.072354    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:15:17.083542    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:15:17.083566    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:15:17.083571    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:15:17.097671    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:15:17.097682    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:15:17.109530    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:15:17.109541    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:15:17.120993    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:15:17.121006    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:15:17.132308    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:15:17.132320    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:15:17.156696    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:15:17.156706    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:15:17.160808    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:15:17.160813    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:15:17.200232    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:15:17.200246    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:15:17.215937    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:15:17.215949    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:15:17.234081    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:15:17.234095    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:15:17.246246    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:15:17.246257    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:15:17.261162    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:15:17.261170    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:15:17.272715    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:15:17.272725    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:15:17.285032    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:15:17.285043    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:15:17.320229    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:15:17.320238    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:15:17.339686    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:15:17.339700    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:15:17.351086    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:15:17.351095    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:15:19.880765    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:15:24.883588    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:15:24.884073    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:15:24.939561    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:15:24.939695    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:15:24.967985    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:15:24.968057    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:15:24.979616    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:15:24.979678    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:15:24.990037    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:15:24.990108    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:15:25.000776    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:15:25.000844    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:15:25.011803    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:15:25.011868    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:15:25.022445    3949 logs.go:276] 0 containers: []
	W0819 04:15:25.022459    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:15:25.022515    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:15:25.032714    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:15:25.032732    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:15:25.032737    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:15:25.047015    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:15:25.047028    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:15:25.058469    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:15:25.058482    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:15:25.069861    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:15:25.069873    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:15:25.081051    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:15:25.081063    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:15:25.085442    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:15:25.085450    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:15:25.119886    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:15:25.119900    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:15:25.131685    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:15:25.131698    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:15:25.143489    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:15:25.143502    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:15:25.155806    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:15:25.155816    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:15:25.191250    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:15:25.191264    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:15:25.210914    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:15:25.210927    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:15:25.225736    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:15:25.225748    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:15:25.245908    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:15:25.245922    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:15:25.257706    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:15:25.257717    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:15:25.271207    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:15:25.271219    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:15:25.288873    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:15:25.288883    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:15:27.816319    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:15:32.819063    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:15:32.819534    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:15:32.852929    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:15:32.853065    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:15:32.872448    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:15:32.872552    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:15:32.886961    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:15:32.887036    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:15:32.899056    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:15:32.899157    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:15:32.910370    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:15:32.910441    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:15:32.921389    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:15:32.921459    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:15:32.932028    3949 logs.go:276] 0 containers: []
	W0819 04:15:32.932039    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:15:32.932093    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:15:32.942768    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:15:32.942790    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:15:32.942796    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:15:32.980512    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:15:32.980522    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:15:32.995153    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:15:32.995166    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:15:33.007016    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:15:33.007028    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:15:33.018739    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:15:33.018751    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:15:33.023658    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:15:33.023667    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:15:33.037712    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:15:33.037725    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:15:33.050663    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:15:33.050678    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:15:33.077018    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:15:33.077031    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:15:33.094601    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:15:33.094610    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:15:33.107344    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:15:33.107354    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:15:33.142194    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:15:33.142206    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:15:33.162824    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:15:33.162833    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:15:33.182244    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:15:33.182253    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:15:33.199371    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:15:33.199382    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:15:33.214465    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:15:33.214478    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:15:33.226646    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:15:33.226657    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:15:35.740441    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:15:40.742368    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:15:40.742820    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:15:40.783997    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:15:40.784140    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:15:40.805384    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:15:40.805506    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:15:40.820466    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:15:40.820542    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:15:40.833519    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:15:40.833591    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:15:40.844494    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:15:40.844560    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:15:40.855376    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:15:40.855444    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:15:40.870168    3949 logs.go:276] 0 containers: []
	W0819 04:15:40.870178    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:15:40.870245    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:15:40.886421    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:15:40.886438    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:15:40.886444    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:15:40.898358    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:15:40.898368    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:15:40.933165    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:15:40.933178    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:15:40.951471    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:15:40.951480    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:15:40.963326    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:15:40.963336    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:15:40.980841    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:15:40.980850    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:15:40.993119    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:15:40.993133    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:15:41.020222    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:15:41.020235    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:15:41.035437    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:15:41.035449    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:15:41.049099    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:15:41.049111    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:15:41.086926    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:15:41.086935    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:15:41.091297    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:15:41.091306    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:15:41.111512    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:15:41.111524    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:15:41.128624    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:15:41.128638    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:15:41.142246    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:15:41.142263    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:15:41.157187    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:15:41.157196    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:15:41.171281    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:15:41.171290    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:15:43.685574    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:15:48.688291    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:15:48.688474    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:15:48.704237    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:15:48.704317    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:15:48.718939    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:15:48.719024    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:15:48.730934    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:15:48.731015    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:15:48.742796    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:15:48.742874    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:15:48.754782    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:15:48.754856    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:15:48.767238    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:15:48.767319    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:15:48.779366    3949 logs.go:276] 0 containers: []
	W0819 04:15:48.779377    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:15:48.779436    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:15:48.789966    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:15:48.789986    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:15:48.789992    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:15:48.803900    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:15:48.803913    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:15:48.815803    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:15:48.815815    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:15:48.853294    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:15:48.853304    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:15:48.890700    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:15:48.890710    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:15:48.902905    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:15:48.902914    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:15:48.907211    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:15:48.907219    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:15:48.919172    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:15:48.919182    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:15:48.930907    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:15:48.930919    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:15:48.948405    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:15:48.948415    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:15:48.959561    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:15:48.959571    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:15:48.971019    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:15:48.971029    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:15:48.985640    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:15:48.985650    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:15:49.002516    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:15:49.002526    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:15:49.013610    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:15:49.013623    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:15:49.027298    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:15:49.027309    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:15:49.046444    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:15:49.046454    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:15:51.570909    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:15:56.573480    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:15:56.573661    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:15:56.590026    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:15:56.590112    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:15:56.603135    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:15:56.603210    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:15:56.614052    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:15:56.614112    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:15:56.624430    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:15:56.624502    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:15:56.634687    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:15:56.634750    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:15:56.645254    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:15:56.645317    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:15:56.655098    3949 logs.go:276] 0 containers: []
	W0819 04:15:56.655108    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:15:56.655158    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:15:56.665502    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:15:56.665522    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:15:56.665527    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:15:56.679083    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:15:56.679096    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:15:56.696086    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:15:56.696099    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:15:56.711096    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:15:56.711109    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:15:56.722426    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:15:56.722437    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:15:56.742660    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:15:56.742671    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:15:56.760123    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:15:56.760132    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:15:56.771119    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:15:56.771131    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:15:56.796107    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:15:56.796115    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:15:56.807200    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:15:56.807213    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:15:56.818542    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:15:56.818551    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:15:56.834733    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:15:56.834743    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:15:56.839123    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:15:56.839133    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:15:56.872478    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:15:56.872487    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:15:56.886564    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:15:56.886574    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:15:56.898347    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:15:56.898358    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:15:56.909927    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:15:56.909938    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:15:59.449643    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:16:04.450836    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:16:04.451048    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:16:04.466998    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:16:04.467083    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:16:04.479219    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:16:04.479293    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:16:04.489857    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:16:04.489929    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:16:04.507368    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:16:04.507438    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:16:04.518014    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:16:04.518081    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:16:04.529091    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:16:04.529153    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:16:04.539506    3949 logs.go:276] 0 containers: []
	W0819 04:16:04.539516    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:16:04.539568    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:16:04.550543    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:16:04.550565    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:16:04.550570    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:16:04.586219    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:16:04.586229    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:16:04.590993    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:16:04.591003    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:16:04.607706    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:16:04.607719    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:16:04.620043    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:16:04.620056    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:16:04.645935    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:16:04.645945    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:16:04.661839    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:16:04.661850    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:16:04.675851    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:16:04.675862    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:16:04.710423    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:16:04.710437    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:16:04.728828    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:16:04.728839    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:16:04.740505    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:16:04.740522    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:16:04.752263    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:16:04.752274    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:16:04.788305    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:16:04.788320    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:16:04.809030    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:16:04.809041    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:16:04.824621    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:16:04.824632    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:16:04.836309    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:16:04.836321    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:16:04.871671    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:16:04.871681    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:16:07.385287    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:16:12.387959    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:16:12.388072    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:16:12.399990    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:16:12.400061    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:16:12.416748    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:16:12.416824    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:16:12.428275    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:16:12.428352    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:16:12.441443    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:16:12.441514    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:16:12.458959    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:16:12.459029    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:16:12.469725    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:16:12.469786    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:16:12.480278    3949 logs.go:276] 0 containers: []
	W0819 04:16:12.480289    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:16:12.480349    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:16:12.490710    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:16:12.490729    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:16:12.490735    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:16:12.529250    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:16:12.529258    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:16:12.564894    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:16:12.564910    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:16:12.577589    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:16:12.577601    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:16:12.600655    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:16:12.600666    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:16:12.629029    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:16:12.629041    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:16:12.646683    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:16:12.646692    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:16:12.658665    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:16:12.658681    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:16:12.684572    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:16:12.684581    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:16:12.698435    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:16:12.698445    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:16:12.718240    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:16:12.718250    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:16:12.730146    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:16:12.730157    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:16:12.742341    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:16:12.742354    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:16:12.754177    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:16:12.754189    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:16:12.758691    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:16:12.758698    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:16:12.770349    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:16:12.770383    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:16:12.786578    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:16:12.786587    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:16:15.298974    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:16:20.301275    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:16:20.301512    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:16:20.320426    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:16:20.320523    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:16:20.334909    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:16:20.334987    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:16:20.346526    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:16:20.346597    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:16:20.357061    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:16:20.357128    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:16:20.367671    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:16:20.367735    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:16:20.381115    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:16:20.381208    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:16:20.391941    3949 logs.go:276] 0 containers: []
	W0819 04:16:20.391955    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:16:20.392009    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:16:20.402364    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:16:20.402383    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:16:20.402388    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:16:20.426745    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:16:20.426756    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:16:20.442309    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:16:20.442333    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:16:20.454360    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:16:20.454370    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:16:20.476055    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:16:20.476065    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:16:20.487158    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:16:20.487169    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:16:20.504339    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:16:20.504352    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:16:20.519583    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:16:20.519598    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:16:20.530827    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:16:20.530841    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:16:20.535219    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:16:20.535226    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:16:20.568804    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:16:20.568818    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:16:20.580675    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:16:20.580688    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:16:20.592693    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:16:20.592703    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:16:20.604568    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:16:20.604580    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:16:20.642316    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:16:20.642325    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:16:20.656432    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:16:20.656443    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:16:20.676463    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:16:20.676474    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:16:23.195636    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:16:28.197910    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
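	[editor's note] The two lines above show the retry pattern that repeats throughout this log: a GET against the apiserver's /healthz endpoint with a bounded client timeout, a "context deadline exceeded" failure, then a full diagnostic pass before the next attempt roughly 2.5 s later. The following is a minimal, illustrative Go sketch of that polling loop only; it is not minikube's actual api_server.go code, and the endpoint, timeout, and interval are read off this log rather than from any configuration.

	// healthzpoll.go: minimal sketch of the healthz retry loop seen above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint and intervals mirror the log (10.0.2.15:8443, ~5s client
		// timeout, ~2.5s between attempts); these are assumptions, not config.
		url := "https://10.0.2.15:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is self-signed in this VM; skip verification
				// for the demo only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				// Corresponds to the "context deadline exceeded" lines above.
				fmt.Printf("attempt %d: healthz unreachable: %v\n", attempt, err)
				time.Sleep(2500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			fmt.Printf("attempt %d: healthz returned %s\n", attempt, resp.Status)
			return
		}
		fmt.Println("apiserver never became healthy")
	}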
	I0819 04:16:28.198369    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:16:28.239349    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:16:28.239504    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:16:28.260501    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:16:28.260606    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:16:28.276091    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:16:28.276169    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:16:28.288904    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:16:28.288983    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:16:28.299713    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:16:28.299789    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:16:28.310755    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:16:28.310829    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:16:28.321026    3949 logs.go:276] 0 containers: []
	W0819 04:16:28.321039    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:16:28.321103    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:16:28.331970    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
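	[editor's note] The block above is the container-discovery step: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per control-plane component, producing the "N containers: [...]" lines and the warning when nothing matches (kindnet here, since this profile does not use it). A hedged Go sketch of that step is below; minikube actually runs these commands over SSH inside the guest (ssh_runner.go), whereas this sketch runs docker locally for illustration.

	// findcontainers.go: sketch of per-component container discovery.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists IDs of containers whose name matches k8s_<component>.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: docker ps failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// Mirrors the `No container was found matching "kindnet"` warning.
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}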
	I0819 04:16:28.331988    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:16:28.331993    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:16:28.349795    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:16:28.349806    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:16:28.374421    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:16:28.374430    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:16:28.378678    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:16:28.378685    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:16:28.398397    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:16:28.398413    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:16:28.411777    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:16:28.411790    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:16:28.426243    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:16:28.426255    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:16:28.466582    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:16:28.466601    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:16:28.481559    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:16:28.481575    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:16:28.501776    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:16:28.501788    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:16:28.516521    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:16:28.516534    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:16:28.538903    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:16:28.538915    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:16:28.550897    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:16:28.550909    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:16:28.562662    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:16:28.562673    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:16:28.574962    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:16:28.574973    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:16:28.610335    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:16:28.610347    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:16:28.625446    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:16:28.625459    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
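	[editor's note] Each cycle then gathers diagnostics: `docker logs --tail 400` for every discovered container, `journalctl` for the kubelet and Docker units, `dmesg`, `kubectl describe nodes`, and a crictl/docker container-status listing. The sketch below strings a representative subset of those commands together; it is illustrative only (minikube wraps each command in `/bin/bash -c` over SSH rather than running it locally), and the container IDs are the apiserver IDs from this run, used purely as placeholders.

	// gatherlogs.go: sketch of one "Gathering logs for ..." pass.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, noting failures.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			fmt.Printf("%s %v failed: %v\n", name, args, err)
		}
		fmt.Print(string(out))
	}

	func main() {
		// IDs would come from the discovery step; placeholders from this run.
		for _, id := range []string{"e6e08462a43e", "82e016e3639d"} {
			fmt.Printf("==> docker logs %s <==\n", id)
			run("docker", "logs", "--tail", "400", id)
		}
		// Unit logs, as in the journalctl invocations above (sudo elided).
		run("journalctl", "-u", "kubelet", "-n", "400")
		run("journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
	}

	The remaining cycles below repeat this same discover-and-gather pass verbatim (with fresh timestamps) after every failed healthz probe, until the test's overall timeout expires.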
	I0819 04:16:31.140693    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:16:36.142991    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:16:36.143457    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:16:36.185778    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:16:36.185919    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:16:36.206937    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:16:36.207027    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:16:36.221937    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:16:36.222011    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:16:36.234579    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:16:36.234654    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:16:36.245617    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:16:36.245690    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:16:36.256618    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:16:36.256696    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:16:36.266975    3949 logs.go:276] 0 containers: []
	W0819 04:16:36.266990    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:16:36.267048    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:16:36.277259    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:16:36.277276    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:16:36.277281    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:16:36.288860    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:16:36.288871    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:16:36.300225    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:16:36.300236    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:16:36.304910    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:16:36.304916    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:16:36.321111    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:16:36.321124    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:16:36.332286    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:16:36.332297    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:16:36.356066    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:16:36.356081    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:16:36.375367    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:16:36.375382    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:16:36.390247    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:16:36.390257    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:16:36.408092    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:16:36.408105    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:16:36.445771    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:16:36.445782    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:16:36.459739    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:16:36.459750    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:16:36.471648    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:16:36.471659    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:16:36.483389    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:16:36.483401    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:16:36.504940    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:16:36.504950    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:16:36.540624    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:16:36.540634    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:16:36.554596    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:16:36.554607    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:16:39.073372    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:16:44.075973    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:16:44.076156    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:16:44.088428    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:16:44.088520    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:16:44.099053    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:16:44.099131    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:16:44.109983    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:16:44.110048    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:16:44.121734    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:16:44.121805    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:16:44.132113    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:16:44.132182    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:16:44.142744    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:16:44.142812    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:16:44.152974    3949 logs.go:276] 0 containers: []
	W0819 04:16:44.152985    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:16:44.153041    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:16:44.166702    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:16:44.166722    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:16:44.166728    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:16:44.202792    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:16:44.202805    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:16:44.222845    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:16:44.222855    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:16:44.240792    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:16:44.240802    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:16:44.252671    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:16:44.252682    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:16:44.264518    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:16:44.264535    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:16:44.275741    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:16:44.275752    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:16:44.288409    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:16:44.288421    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:16:44.312335    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:16:44.312343    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:16:44.348135    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:16:44.348145    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:16:44.362344    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:16:44.362357    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:16:44.376878    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:16:44.376890    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:16:44.394121    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:16:44.394131    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:16:44.410541    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:16:44.410556    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:16:44.422018    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:16:44.422030    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:16:44.427047    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:16:44.427054    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:16:44.444758    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:16:44.444770    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:16:46.959836    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:16:51.961127    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:16:51.961498    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:16:51.991477    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:16:51.991609    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:16:52.009594    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:16:52.009694    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:16:52.026043    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:16:52.026112    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:16:52.038227    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:16:52.038297    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:16:52.049503    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:16:52.049577    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:16:52.061011    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:16:52.061083    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:16:52.071679    3949 logs.go:276] 0 containers: []
	W0819 04:16:52.071693    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:16:52.071755    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:16:52.084849    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:16:52.084866    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:16:52.084872    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:16:52.121155    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:16:52.121166    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:16:52.155433    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:16:52.155448    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:16:52.173072    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:16:52.173085    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:16:52.187354    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:16:52.187366    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:16:52.199163    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:16:52.199173    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:16:52.219326    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:16:52.219337    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:16:52.231428    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:16:52.231440    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:16:52.243324    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:16:52.243334    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:16:52.268571    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:16:52.268582    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:16:52.280348    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:16:52.280358    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:16:52.284901    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:16:52.284908    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:16:52.299143    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:16:52.299154    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:16:52.310746    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:16:52.310759    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:16:52.323061    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:16:52.323071    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:16:52.338507    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:16:52.338517    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:16:52.350592    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:16:52.350604    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:16:54.870125    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:16:59.872371    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:16:59.872512    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:16:59.887413    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:16:59.887486    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:16:59.901886    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:16:59.901959    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:16:59.912594    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:16:59.912656    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:16:59.923608    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:16:59.923670    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:16:59.940449    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:16:59.940515    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:16:59.955737    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:16:59.955821    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:16:59.972172    3949 logs.go:276] 0 containers: []
	W0819 04:16:59.972183    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:16:59.972237    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:16:59.983174    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:16:59.983196    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:16:59.983201    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:16:59.995318    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:16:59.995329    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:00.031098    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:00.031108    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:00.035506    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:00.035516    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:00.047549    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:00.047562    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:00.058795    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:00.058807    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:00.071913    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:00.071924    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:00.086479    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:00.086489    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:00.098658    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:00.098669    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:00.113752    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:00.113765    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:00.125217    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:00.125229    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:00.145118    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:00.145131    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:00.162537    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:00.162549    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:00.174556    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:00.174568    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:00.191366    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:00.191377    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:00.225639    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:00.225651    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:00.240645    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:00.240656    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:02.765088    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:07.767700    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:07.767899    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:07.779690    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:07.779767    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:07.790588    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:07.790668    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:07.801725    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:07.801801    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:07.815367    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:07.815439    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:07.827061    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:07.827130    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:07.838455    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:07.838518    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:07.848720    3949 logs.go:276] 0 containers: []
	W0819 04:17:07.848732    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:07.848795    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:07.859544    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:07.859563    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:07.859568    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:07.882823    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:07.882833    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:07.898112    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:07.898123    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:07.910116    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:07.910126    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:07.933826    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:07.933837    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:07.971261    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:07.971281    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:07.975838    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:07.975847    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:08.013034    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:08.013046    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:08.034976    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:08.034991    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:08.047657    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:08.047669    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:08.070237    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:08.070254    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:08.082242    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:08.082254    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:08.094278    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:08.094291    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:08.111257    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:08.111267    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:08.133450    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:08.133468    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:08.146424    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:08.146434    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:08.158674    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:08.158685    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:10.673840    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:15.676077    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:15.676273    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:15.687794    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:15.687877    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:15.699006    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:15.699097    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:15.709837    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:15.709912    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:15.721476    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:15.721544    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:15.731970    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:15.732041    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:15.743108    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:15.743171    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:15.753866    3949 logs.go:276] 0 containers: []
	W0819 04:17:15.753877    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:15.753939    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:15.771085    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:15.771103    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:15.771108    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:15.791120    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:15.791130    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:15.810391    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:15.810401    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:15.821815    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:15.821826    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:15.826524    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:15.826534    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:15.838606    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:15.838619    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:15.857168    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:15.857178    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:15.894190    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:15.894198    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:15.908781    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:15.908794    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:15.922947    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:15.922958    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:15.934171    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:15.934180    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:15.956959    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:15.956969    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:15.969214    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:15.969227    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:16.006675    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:16.006687    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:16.029943    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:16.029953    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:16.041792    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:16.041803    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:16.054558    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:16.054569    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:18.568229    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:23.570989    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:23.571247    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:23.597221    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:23.597348    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:23.614812    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:23.614897    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:23.628467    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:23.628544    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:23.639842    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:23.639908    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:23.654364    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:23.654440    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:23.665207    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:23.665274    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:23.675565    3949 logs.go:276] 0 containers: []
	W0819 04:17:23.675577    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:23.675633    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:23.686006    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:23.686027    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:23.686033    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:23.706491    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:23.706505    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:23.718197    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:23.718211    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:23.735271    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:23.735281    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:23.771598    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:23.771610    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:23.784384    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:23.784394    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:23.796988    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:23.797001    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:23.833886    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:23.833898    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:23.859280    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:23.859294    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:23.863952    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:23.863962    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:23.898883    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:23.898896    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:23.915925    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:23.915936    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:23.928778    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:23.928793    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:23.943156    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:23.943167    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:23.959445    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:23.959458    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:23.977087    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:23.977097    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:23.991754    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:23.991766    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:26.505397    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:31.507584    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:31.507727    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:31.520591    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:31.520667    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:31.531382    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:31.531452    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:31.545813    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:31.545888    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:31.556352    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:31.556414    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:31.567213    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:31.567278    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:31.577735    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:31.577809    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:31.587693    3949 logs.go:276] 0 containers: []
	W0819 04:17:31.587704    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:31.587764    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:31.598292    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:31.598308    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:31.598314    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:31.612120    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:31.612134    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:31.627242    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:31.627254    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:31.639231    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:31.639240    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:31.657967    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:31.657980    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:31.677803    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:31.677816    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:31.689818    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:31.689831    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:31.725891    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:31.725901    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:31.730105    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:31.730114    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:31.753505    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:31.753529    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:31.767648    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:31.767663    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:31.805583    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:31.805595    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:31.830305    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:31.830318    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:31.843172    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:31.843184    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:31.867154    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:31.867169    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:31.880659    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:31.880672    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:31.900807    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:31.900829    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:34.417861    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:39.420087    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:39.420222    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:39.436327    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:39.436413    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:39.448168    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:39.448244    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:39.459054    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:39.459130    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:39.470225    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:39.470295    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:39.481057    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:39.481120    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:39.494210    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:39.494287    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:39.504106    3949 logs.go:276] 0 containers: []
	W0819 04:17:39.504115    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:39.504168    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:39.515020    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:39.515036    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:39.515042    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:39.532915    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:39.532927    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:39.549300    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:39.549311    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:39.570608    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:39.570620    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:39.582655    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:39.582665    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:39.594547    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:39.594559    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:39.599591    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:39.599599    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:39.637659    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:39.637670    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:39.652545    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:39.652559    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:39.691784    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:39.691799    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:39.705387    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:39.705401    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:39.716989    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:39.717003    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:39.728872    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:39.728885    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:39.748706    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:39.748719    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:39.760715    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:39.760728    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:39.776524    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:39.776535    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:39.799114    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:39.799122    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:42.314087    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:47.316248    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:47.316357    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:47.328701    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:47.328783    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:47.339930    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:47.340006    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:47.350939    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:47.351016    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:47.361292    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:47.361368    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:47.372145    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:47.372216    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:47.383081    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:47.383148    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:47.393388    3949 logs.go:276] 0 containers: []
	W0819 04:17:47.393399    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:47.393457    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:47.404333    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:47.404352    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:47.404357    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:47.416193    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:47.416203    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:47.453417    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:47.453431    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:47.489440    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:47.489453    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:47.505685    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:47.505695    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:47.517653    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:47.517665    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:47.533487    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:47.533499    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:47.546090    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:47.546102    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:47.564289    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:47.564299    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:47.575862    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:47.575875    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:47.591269    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:47.591282    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:47.613451    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:47.613460    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:47.641481    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:47.641494    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:47.656404    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:47.656418    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:47.669086    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:47.669097    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:47.686860    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:47.686869    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:47.691328    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:47.691338    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
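	Each gather cycle above repeats the same two-step pattern per component: list container IDs with a docker name filter, then tail the last 400 lines of each container's logs. A compressed sketch of that loop (a hypothetical standalone helper; minikube runs the equivalent commands remotely through ssh_runner.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherLogs lists containers whose names match k8s_<component> and
	// tails the last 400 lines of each, as in the cycle above.
	func gatherLogs(component string) error {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return err
			}
			fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
		}
		return nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			if err := gatherLogs(c); err != nil {
				fmt.Println("gather", c, "failed:", err)
			}
		}
	}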
	I0819 04:17:50.207234    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:55.209438    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:55.209591    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:55.221565    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:55.221657    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:55.237258    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:55.237333    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:55.248204    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:55.248269    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:55.259397    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:55.259468    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:55.270949    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:55.271013    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:55.281921    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:55.281987    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:55.299649    3949 logs.go:276] 0 containers: []
	W0819 04:17:55.299659    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:55.299725    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:55.310467    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:55.310486    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:55.310491    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:55.328410    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:55.328420    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:55.342155    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:55.342166    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:55.378202    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:55.378211    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:55.413204    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:55.413216    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:55.425726    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:55.425738    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:55.439269    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:55.439283    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:55.452988    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:55.453005    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:55.465138    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:55.465153    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:55.470435    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:55.470442    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:55.490174    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:55.490185    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:55.504080    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:55.504095    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:55.522523    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:55.522534    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:55.536838    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:55.536848    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:55.548045    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:55.548056    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:55.571235    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:55.571258    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:55.589076    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:55.589087    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:58.102655    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:03.105373    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:03.105507    3949 kubeadm.go:597] duration metric: took 4m4.063838917s to restartPrimaryControlPlane
	W0819 04:18:03.105624    3949 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 04:18:03.105683    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 04:18:04.106308    3949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.000623875s)
	I0819 04:18:04.106379    3949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 04:18:04.111546    3949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:18:04.114480    3949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:18:04.117422    3949 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:18:04.117428    3949 kubeadm.go:157] found existing configuration files:
	
	I0819 04:18:04.117453    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/admin.conf
	I0819 04:18:04.119944    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:18:04.119969    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:18:04.122721    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/kubelet.conf
	I0819 04:18:04.125970    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:18:04.125998    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:18:04.129031    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/controller-manager.conf
	I0819 04:18:04.131597    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:18:04.131622    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:18:04.134586    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/scheduler.conf
	I0819 04:18:04.137788    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:18:04.137817    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
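	The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is stale and is removed before kubeadm init runs. Sketched directly (assuming local file access instead of the ssh_runner indirection; in the log the files are already gone after kubeadm reset, which is handled the same way):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleConfigs removes any kubeconfig that does not mention the
	// expected endpoint; a missing file is treated like a stale one.
	func cleanStaleConfigs(endpoint string) {
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
				os.Remove(path) // rm -f semantics: ignore errors on missing files
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:50264")
	}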
	I0819 04:18:04.140976    3949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 04:18:04.159295    3949 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 04:18:04.159395    3949 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 04:18:04.208932    3949 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 04:18:04.208986    3949 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 04:18:04.209035    3949 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 04:18:04.263218    3949 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 04:18:04.267168    3949 out.go:235]   - Generating certificates and keys ...
	I0819 04:18:04.267203    3949 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 04:18:04.267234    3949 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 04:18:04.267277    3949 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 04:18:04.267311    3949 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 04:18:04.267344    3949 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 04:18:04.267375    3949 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 04:18:04.267406    3949 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 04:18:04.267434    3949 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 04:18:04.267505    3949 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 04:18:04.267572    3949 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 04:18:04.267593    3949 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 04:18:04.267629    3949 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 04:18:04.352173    3949 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 04:18:04.532900    3949 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 04:18:04.617771    3949 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 04:18:04.789926    3949 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 04:18:04.818290    3949 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 04:18:04.818641    3949 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 04:18:04.818783    3949 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 04:18:04.905432    3949 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 04:18:04.909314    3949 out.go:235]   - Booting up control plane ...
	I0819 04:18:04.909366    3949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 04:18:04.909404    3949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 04:18:04.914810    3949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 04:18:04.915056    3949 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 04:18:04.915814    3949 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 04:18:08.917337    3949 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001264 seconds
	I0819 04:18:08.917570    3949 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 04:18:08.921499    3949 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 04:18:09.434875    3949 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 04:18:09.435131    3949 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-079000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 04:18:09.941123    3949 kubeadm.go:310] [bootstrap-token] Using token: ronyev.g7zknjg3pm347ihg
	I0819 04:18:09.945112    3949 out.go:235]   - Configuring RBAC rules ...
	I0819 04:18:09.945177    3949 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 04:18:09.945231    3949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 04:18:09.949042    3949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 04:18:09.949968    3949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 04:18:09.951111    3949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 04:18:09.952361    3949 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 04:18:09.955924    3949 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 04:18:10.133705    3949 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 04:18:10.346235    3949 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 04:18:10.346247    3949 kubeadm.go:310] 
	I0819 04:18:10.346288    3949 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 04:18:10.346291    3949 kubeadm.go:310] 
	I0819 04:18:10.346326    3949 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 04:18:10.346330    3949 kubeadm.go:310] 
	I0819 04:18:10.346341    3949 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 04:18:10.346368    3949 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 04:18:10.346406    3949 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 04:18:10.346413    3949 kubeadm.go:310] 
	I0819 04:18:10.346448    3949 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 04:18:10.346452    3949 kubeadm.go:310] 
	I0819 04:18:10.346471    3949 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 04:18:10.346474    3949 kubeadm.go:310] 
	I0819 04:18:10.346498    3949 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 04:18:10.346542    3949 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 04:18:10.346587    3949 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 04:18:10.346594    3949 kubeadm.go:310] 
	I0819 04:18:10.346630    3949 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 04:18:10.346672    3949 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 04:18:10.346678    3949 kubeadm.go:310] 
	I0819 04:18:10.346729    3949 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ronyev.g7zknjg3pm347ihg \
	I0819 04:18:10.346779    3949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:200cf9aaf4d8090b061170c9280858f68184aa10356c82792dd3b43229196e5e \
	I0819 04:18:10.346789    3949 kubeadm.go:310] 	--control-plane 
	I0819 04:18:10.346791    3949 kubeadm.go:310] 
	I0819 04:18:10.346843    3949 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 04:18:10.346851    3949 kubeadm.go:310] 
	I0819 04:18:10.346887    3949 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ronyev.g7zknjg3pm347ihg \
	I0819 04:18:10.346933    3949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:200cf9aaf4d8090b061170c9280858f68184aa10356c82792dd3b43229196e5e 
	I0819 04:18:10.346995    3949 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 04:18:10.347003    3949 cni.go:84] Creating CNI manager for ""
	I0819 04:18:10.347010    3949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:18:10.355178    3949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 04:18:10.358381    3949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 04:18:10.361485    3949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 04:18:10.368116    3949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 04:18:10.368170    3949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 04:18:10.368170    3949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-079000 minikube.k8s.io/updated_at=2024_08_19T04_18_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=running-upgrade-079000 minikube.k8s.io/primary=true
	I0819 04:18:10.408691    3949 ops.go:34] apiserver oom_adj: -16
	I0819 04:18:10.408782    3949 kubeadm.go:1113] duration metric: took 40.662916ms to wait for elevateKubeSystemPrivileges
	I0819 04:18:10.408795    3949 kubeadm.go:394] duration metric: took 4m11.380638375s to StartCluster
	I0819 04:18:10.408804    3949 settings.go:142] acquiring lock: {Name:mkadddaa5ec690138051e9a9334213fba69e0867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:18:10.408888    3949 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:18:10.409276    3949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/kubeconfig: {Name:mkcc8b27cbda2ef567c4911aa335c1e1951a7d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:18:10.409480    3949 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:18:10.409532    3949 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 04:18:10.409573    3949 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-079000"
	I0819 04:18:10.409585    3949 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-079000"
	W0819 04:18:10.409591    3949 addons.go:243] addon storage-provisioner should already be in state true
	I0819 04:18:10.409601    3949 host.go:66] Checking if "running-upgrade-079000" exists ...
	I0819 04:18:10.409592    3949 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-079000"
	I0819 04:18:10.409630    3949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-079000"
	I0819 04:18:10.409735    3949 config.go:182] Loaded profile config "running-upgrade-079000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:18:10.409862    3949 retry.go:31] will retry after 1.413875858s: connect: dial unix /Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/monitor: connect: connection refused
	I0819 04:18:10.410654    3949 kapi.go:59] client config for running-upgrade-079000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/client.key", CAFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106391610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
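	The rest.Config dump above is client-go's view of the cluster: host https://10.0.2.15:8443 plus the profile's client cert/key and CA. Building an equivalent client from the kubeconfig minikube just wrote takes only a few lines of the public client-go API (a sketch, not minikube's kapi helper):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig the log shows being updated above.
		config, err := clientcmd.BuildConfigFromFlags("",
			"/Users/jenkins/minikube-integration/19476-967/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver:", config.Host)
		_ = clientset // ready for e.g. listing StorageClasses, as the addon code does
	}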
	I0819 04:18:10.410775    3949 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-079000"
	W0819 04:18:10.410779    3949 addons.go:243] addon default-storageclass should already be in state true
	I0819 04:18:10.410785    3949 host.go:66] Checking if "running-upgrade-079000" exists ...
	I0819 04:18:10.411295    3949 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 04:18:10.411299    3949 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 04:18:10.411304    3949 sshutil.go:53] new ssh client: &{IP:localhost Port:50232 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/id_rsa Username:docker}
	I0819 04:18:10.412424    3949 out.go:177] * Verifying Kubernetes components...
	I0819 04:18:10.417189    3949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:18:10.512246    3949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:18:10.518389    3949 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:18:10.518438    3949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:18:10.523271    3949 api_server.go:72] duration metric: took 113.7805ms to wait for apiserver process to appear ...
	I0819 04:18:10.523281    3949 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:18:10.523291    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:10.598191    3949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 04:18:10.898418    3949 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 04:18:10.898431    3949 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 04:18:11.831721    3949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:18:11.836798    3949 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:18:11.836808    3949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 04:18:11.836821    3949 sshutil.go:53] new ssh client: &{IP:localhost Port:50232 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/id_rsa Username:docker}
	I0819 04:18:11.868094    3949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:18:15.524420    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:15.524456    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:20.525254    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:20.525316    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:25.525526    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:25.525548    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:30.525756    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:30.525789    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:35.526196    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:35.526217    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:40.526654    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:40.526705    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 04:18:40.900453    3949 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 04:18:40.904773    3949 out.go:177] * Enabled addons: storage-provisioner
	I0819 04:18:40.913620    3949 addons.go:510] duration metric: took 30.504467333s for enable addons: enabled=[storage-provisioner]
	I0819 04:18:45.527456    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:45.527494    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:50.528419    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:50.528516    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:55.530227    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:55.530251    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:00.531756    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:00.531778    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:05.533844    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:05.533883    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:10.536066    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:10.536193    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:10.548012    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:19:10.548087    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:10.558455    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:19:10.558523    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:10.569149    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:19:10.569216    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:10.579007    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:19:10.579095    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:10.589855    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:19:10.589941    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:10.600276    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:19:10.600344    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:10.611623    3949 logs.go:276] 0 containers: []
	W0819 04:19:10.611642    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:10.611715    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:10.622088    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:19:10.622104    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:10.622110    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:10.626770    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:10.626780    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:10.664019    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:19:10.664030    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:19:10.678456    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:19:10.678472    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:19:10.690299    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:19:10.690312    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:19:10.702833    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:10.702846    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:10.728453    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:19:10.728465    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:10.740182    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:10.740194    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:10.777735    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.777834    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.778370    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.778457    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:10.779720    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:19:10.779731    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:19:10.798013    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:19:10.798025    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:19:10.810245    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:19:10.810259    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:19:10.824921    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:19:10.824932    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:19:10.836725    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:19:10.836736    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:19:10.854520    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:10.854529    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:10.854571    3949 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0819 04:19:10.854576    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	  Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.854582    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	  Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.854587    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	  Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.854591    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	  Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:10.854594    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:10.854597    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
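	The "Found kubelet problem" entries above come from scanning the journalctl output line by line for known failure signatures (here, RBAC "forbidden" reflector errors) and replaying the matches in the "X Problems detected in kubelet" summary. A minimal sketch of that scan (the signature list is illustrative, not minikube's full set from logs.go):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// findKubeletProblems returns journal lines matching known bad
	// signatures, echoing what the log calls logs.go:138 above.
	func findKubeletProblems(journal string) []string {
		signatures := []string{"is forbidden", "Failed to watch", "failed to list"}
		var problems []string
		sc := bufio.NewScanner(strings.NewReader(journal))
		for sc.Scan() {
			line := sc.Text()
			for _, sig := range signatures {
				if strings.Contains(line, sig) {
					problems = append(problems, line)
					break
				}
			}
		}
		return problems
	}

	func main() {
		journal := "Aug 19 11:18:23 ... configmaps \"kube-proxy\" is forbidden: ..."
		for _, p := range findKubeletProblems(journal) {
			fmt.Println("Found kubelet problem:", p)
		}
	}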
	I0819 04:19:20.858620    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:25.860846    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:25.861031    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:25.873047    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:19:25.873129    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:25.883742    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:19:25.883826    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:25.894415    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:19:25.894480    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:25.904919    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:19:25.904991    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:25.915191    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:19:25.915265    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:25.925766    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:19:25.925839    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:25.936115    3949 logs.go:276] 0 containers: []
	W0819 04:19:25.936126    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:25.936189    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:25.946194    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:19:25.946213    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:19:25.946218    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:19:25.957821    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:25.957833    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:25.981572    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:25.981580    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:26.016301    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.016395    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.016943    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.017031    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:26.018301    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:26.018308    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:26.022579    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:26.022587    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:26.060353    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:19:26.060362    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:19:26.076346    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:19:26.076357    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:19:26.092809    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:19:26.092820    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:19:26.104789    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:19:26.104800    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:19:26.116861    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:19:26.116874    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:19:26.128675    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:19:26.128685    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:19:26.148439    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:19:26.148450    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:19:26.166304    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:19:26.166314    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:26.178100    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:26.178114    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:26.178141    3949 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0819 04:19:26.178145    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	  Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.178148    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	  Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.178152    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	  Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.178155    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	  Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:26.178163    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:26.178179    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:36.182233    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:41.184454    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:41.184674    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:41.200355    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:19:41.200444    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:41.212554    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:19:41.212622    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:41.223661    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:19:41.223736    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:41.234064    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:19:41.234132    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:41.244851    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:19:41.244920    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:41.258930    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:19:41.259007    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:41.269162    3949 logs.go:276] 0 containers: []
	W0819 04:19:41.269173    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:41.269230    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:41.280026    3949 logs.go:276] 1 containers: [ce9e3ca02329]
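The eight lookups above all follow one pattern: each component is located with a docker ps name filter on the k8s_ prefix that kubelet's dockershim gives pod containers. A condensed, hand-runnable form of the same sweep (a sketch; the component names and flags are taken verbatim from the log):

	# Enumerate the container ID(s) for each component, as the log does.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  printf '%s: ' "$c"
	  docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | tr '\n' ' '
	  echo
	done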
	I0819 04:19:41.280044    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:41.280051    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:41.316978    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.317072    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.317618    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.317716    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
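All four kubelet problems are the same authorization failure: the node identity system:node:running-upgrade-079000 is denied list/watch on the kube-proxy and coredns ConfigMaps, and the "no relationship found between node ... and this object" wording is the Node authorizer reporting that no pod on that node references those objects. A hypothetical way to re-check that exact permission from an admin kubeconfig, impersonating the identity quoted in the error:

	# Hypothetical diagnostic; with --as/--as-group the review is evaluated
	# as the node identity, by the same authorizer chain that denied it above.
	kubectl auth can-i list configmaps \
	  --namespace kube-system \
	  --as system:node:running-upgrade-079000 \
	  --as-group system:nodes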
	I0819 04:19:41.318998    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:41.319007    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:41.323364    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:19:41.323373    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:19:41.337030    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:19:41.337041    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:19:41.348701    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:19:41.348713    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:19:41.359986    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:19:41.359999    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:19:41.377571    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:41.377584    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:41.402816    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:41.402826    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:41.437084    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:19:41.437098    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:19:41.451548    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:19:41.451559    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:19:41.463084    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:19:41.463097    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:19:41.475056    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:19:41.475068    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:19:41.490013    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:19:41.490025    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:41.502676    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:41.502685    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:41.502711    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:19:41.502718    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.502725    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.502729    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.502732    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:41.502738    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:41.502740    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
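From here the report repeats the same cycle at roughly 15-second intervals: a healthz probe that times out after about 5 seconds, a fresh container sweep, a full log gather, and the same four kubelet problems. As a loose bash sketch of that pattern (assumed shape only, not minikube's actual retry code):

	# Poll the apiserver; on failure, surface the recurring kubelet denials
	# and retry after the ~10s pause visible between probes in the log.
	until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
	  sudo journalctl -u kubelet -n 400 | grep -E 'reflector|forbidden' | tail -n 4
	  sleep 10
	done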
	I0819 04:19:51.505990    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:56.508498    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:56.508695    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:56.525774    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:19:56.525870    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:56.538925    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:19:56.539007    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:56.550297    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:19:56.550361    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:56.563773    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:19:56.563843    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:56.574101    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:19:56.574167    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:56.585599    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:19:56.585669    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:56.596486    3949 logs.go:276] 0 containers: []
	W0819 04:19:56.596495    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:56.596551    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:56.607134    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:19:56.607150    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:56.607155    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:56.633913    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:19:56.633926    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:56.645937    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:56.645948    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:56.683527    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.683623    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.684172    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.684261    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:56.685529    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:56.685538    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:56.690286    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:19:56.690295    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:19:56.704798    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:19:56.704809    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:19:56.724560    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:19:56.724574    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:19:56.743582    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:19:56.743592    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:19:56.755182    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:56.755195    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:56.790498    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:19:56.790508    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:19:56.806921    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:19:56.806932    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:19:56.820806    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:19:56.820816    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:19:56.833802    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:19:56.833813    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:19:56.845891    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:56.845901    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:56.845929    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:19:56.845933    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.845937    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.845939    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.845949    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:56.845952    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:56.845955    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:20:06.848857    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:11.851123    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:11.851366    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:11.870935    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:20:11.871043    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:11.887429    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:20:11.887506    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:11.898947    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:20:11.899019    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:11.909187    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:20:11.909265    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:11.919943    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:20:11.920014    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:11.930738    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:20:11.930816    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:11.940986    3949 logs.go:276] 0 containers: []
	W0819 04:20:11.941002    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:11.941066    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:11.953392    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:20:11.953408    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:20:11.953414    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:11.970975    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:11.970986    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:11.975273    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:20:11.975282    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:20:11.989448    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:20:11.989461    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:20:12.009679    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:20:12.009690    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:20:12.020911    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:12.020924    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:12.045538    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:20:12.045549    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:20:12.060245    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:20:12.060257    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:20:12.072097    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:12.072106    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:12.109810    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.109906    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.110443    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.110531    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:12.111820    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:12.111826    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:12.148577    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:20:12.148591    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:20:12.162944    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:20:12.162954    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:20:12.174509    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:20:12.174519    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:20:12.187470    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:12.187480    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:12.187507    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:20:12.187512    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.187516    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.187521    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.187524    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:12.187527    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:12.187540    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:20:22.191546    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:27.193807    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:27.194172    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:27.225197    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:20:27.225335    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:27.244022    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:20:27.244116    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:27.260363    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:20:27.260428    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:27.271792    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:20:27.271866    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:27.287132    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:20:27.287193    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:27.297405    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:20:27.297467    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:27.307712    3949 logs.go:276] 0 containers: []
	W0819 04:20:27.307721    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:27.307777    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:27.318581    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:20:27.318597    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:27.318603    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:27.357433    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:20:27.357446    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:20:27.371706    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:20:27.371719    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:20:27.383516    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:20:27.383528    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:20:27.395366    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:20:27.395381    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:27.407471    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:27.407481    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:27.444728    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.444821    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.445337    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.445425    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:27.446717    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:20:27.446722    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:20:27.463828    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:20:27.463839    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:20:27.475791    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:27.475803    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:27.500034    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:27.500042    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:27.504692    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:20:27.504701    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:20:27.521580    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:20:27.521592    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:20:27.533538    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:20:27.533552    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:20:27.547937    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:20:27.547950    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:20:27.559953    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:20:27.559964    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:20:27.579596    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:27.579608    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:27.579632    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:20:27.579638    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.579642    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.579646    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.579649    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:27.579652    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:27.579654    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:20:37.582722    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:42.584141    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:42.584374    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:42.604587    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:20:42.604673    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:42.618782    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:20:42.618863    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:42.630925    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:20:42.630991    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:42.642266    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:20:42.642340    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:42.653920    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:20:42.653989    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:42.664633    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:20:42.664698    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:42.675183    3949 logs.go:276] 0 containers: []
	W0819 04:20:42.675195    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:42.675260    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:42.686358    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:20:42.686374    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:42.686379    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:42.691663    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:20:42.691673    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:20:42.709192    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:20:42.709204    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:20:42.722838    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:20:42.722850    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:20:42.734155    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:20:42.734168    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:20:42.745918    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:20:42.745932    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:20:42.757680    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:42.757693    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:42.793336    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.793436    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.793984    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.794076    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:42.795393    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:42.795399    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:42.831182    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:20:42.831192    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:20:42.845607    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:20:42.845618    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:20:42.862702    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:42.862721    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:42.890311    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:20:42.890327    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:42.903726    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:20:42.903738    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:20:42.921252    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:20:42.921265    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:20:42.933243    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:20:42.933251    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:20:42.944836    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:42.944847    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:42.944873    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:20:42.944878    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.944881    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.944887    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.944901    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:42.944905    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:42.944908    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:20:52.948920    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:57.951331    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:57.951785    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:57.989059    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:20:57.989246    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:58.009347    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:20:58.009445    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:58.024320    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:20:58.024403    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:58.038621    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:20:58.038686    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:58.050065    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:20:58.050133    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:58.061404    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:20:58.061477    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:58.072085    3949 logs.go:276] 0 containers: []
	W0819 04:20:58.072096    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:58.072157    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:58.083011    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:20:58.083028    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:20:58.083033    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:20:58.104312    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:20:58.104326    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:20:58.117156    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:20:58.117166    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:20:58.132471    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:20:58.132480    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:20:58.144605    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:58.144614    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:58.169871    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:20:58.169881    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:58.184248    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:58.184260    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:58.224610    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:20:58.224625    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:20:58.236498    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:20:58.236506    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:20:58.252258    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:58.252280    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:58.290635    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.290732    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.291282    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.291369    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:58.292618    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:20:58.292623    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:20:58.306701    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:20:58.306714    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:20:58.327678    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:20:58.327692    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:20:58.339527    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:20:58.339538    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:20:58.357566    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:58.357576    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:58.362729    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:58.362737    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:58.362763    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:20:58.362768    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.362771    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.362774    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.362777    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:58.362780    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:58.362783    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:08.366786    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:13.368715    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:13.368938    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:13.404327    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:21:13.404412    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:13.416509    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:21:13.416581    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:13.427027    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:21:13.427101    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:13.437633    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:21:13.437698    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:13.448328    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:21:13.448401    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:13.462656    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:21:13.462718    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:13.472956    3949 logs.go:276] 0 containers: []
	W0819 04:21:13.472968    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:13.473028    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:13.483602    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:21:13.483622    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:21:13.483629    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:21:13.495074    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:13.495088    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:13.518455    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:13.518464    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:13.558506    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:21:13.558518    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:21:13.579791    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:13.579801    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:13.617327    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.617426    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.617944    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.618031    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:13.619255    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:21:13.619260    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:21:13.633752    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:21:13.633767    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:21:13.646115    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:21:13.646126    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:21:13.662614    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:21:13.662626    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:21:13.674123    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:21:13.674137    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:21:13.691606    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:21:13.691616    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:13.706895    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:21:13.706906    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:21:13.723794    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:21:13.723805    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:21:13.738359    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:13.738370    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:13.742812    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:21:13.742820    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:21:13.755466    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:13.755478    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:13.755505    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:21:13.755511    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.755513    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.755517    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.755520    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:13.755523    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:13.755526    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:23.759276    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:28.761597    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:28.761788    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:28.777378    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:21:28.777457    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:28.792673    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:21:28.792751    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:28.803921    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:21:28.803994    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:28.816713    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:21:28.816784    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:28.827240    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:21:28.827305    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:28.838451    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:21:28.838517    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:28.849286    3949 logs.go:276] 0 containers: []
	W0819 04:21:28.849297    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:28.849354    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:28.860252    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:21:28.860269    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:21:28.860274    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:21:28.873208    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:21:28.873220    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:21:28.888780    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:21:28.888790    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:21:28.906616    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:21:28.906626    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:21:28.918418    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:21:28.918429    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:21:28.930548    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:21:28.930557    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:21:28.946843    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:28.946852    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:28.983945    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:28.984042    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:28.984591    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:28.984682    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:28.985982    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:21:28.985992    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:21:28.998615    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:21:28.998627    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:21:29.009822    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:21:29.009835    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:21:29.024141    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:21:29.024152    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:21:29.039853    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:29.039863    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:29.064953    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:21:29.064964    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:29.076741    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:29.076753    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:29.081619    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:29.081628    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:29.116406    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:29.116416    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:29.116445    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:21:29.116451    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:29.116454    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:29.116458    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:29.116461    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:29.116465    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:29.116468    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:39.120430    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:44.122545    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:44.122694    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:44.138767    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:21:44.138857    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:44.151882    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:21:44.151955    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:44.162828    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:21:44.162904    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:44.173440    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:21:44.173511    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:44.191772    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:21:44.191842    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:44.202324    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:21:44.202393    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:44.212781    3949 logs.go:276] 0 containers: []
	W0819 04:21:44.212792    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:44.212849    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:44.223626    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:21:44.223643    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:44.223649    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:44.248846    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:44.248856    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:44.253351    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:21:44.253361    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:21:44.265388    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:21:44.265401    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:21:44.277165    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:21:44.277176    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:21:44.291209    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:21:44.291220    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:21:44.303477    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:21:44.303491    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:21:44.315104    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:21:44.315114    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:21:44.326226    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:21:44.326237    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:21:44.341458    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:21:44.341472    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:21:44.359646    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:21:44.359656    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:44.371124    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:44.371137    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:44.408183    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.408281    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.408814    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.408902    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:44.410206    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:44.410210    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:44.445255    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:21:44.445266    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:21:44.459455    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:21:44.459468    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:21:44.471518    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:44.471529    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:44.471555    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:21:44.471559    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.471562    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.471566    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.471569    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:44.471579    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:44.471582    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:54.475581    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:59.477802    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:59.477910    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:59.490084    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:21:59.490160    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:59.501911    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:21:59.501995    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:59.515965    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:21:59.516038    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:59.527067    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:21:59.527144    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:59.539445    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:21:59.539515    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:59.550215    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:21:59.550290    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:59.563937    3949 logs.go:276] 0 containers: []
	W0819 04:21:59.563955    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:59.564020    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:59.579278    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:21:59.579295    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:21:59.579302    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:21:59.591611    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:21:59.591621    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:21:59.606872    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:21:59.606887    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:21:59.637010    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:59.637022    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:59.661381    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:59.661395    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:59.697531    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.697626    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.698160    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.698247    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:59.699521    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:21:59.699531    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:21:59.714085    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:21:59.714094    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:21:59.729351    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:21:59.729363    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:21:59.747525    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:21:59.747536    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:21:59.759537    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:21:59.759552    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:21:59.771419    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:21:59.771432    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:59.783551    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:59.783566    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:59.788612    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:59.788621    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:59.826528    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:21:59.826540    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:21:59.841153    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:21:59.841165    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:21:59.854011    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:59.854021    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:59.854049    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:21:59.854054    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.854058    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.854063    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.854067    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:59.854070    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:59.854073    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:09.858092    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:14.860359    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:14.864745    3949 out.go:201] 
	W0819 04:22:14.868836    3949 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 04:22:14.868845    3949 out.go:270] * 
	W0819 04:22:14.869413    3949 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:22:14.884743    3949 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-079000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-19 04:22:14.963697 -0700 PDT m=+2843.187800834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-079000 -n running-upgrade-079000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-079000 -n running-upgrade-079000: exit status 2 (15.693279416s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-079000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-955000          | force-systemd-flag-955000 | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-413000              | force-systemd-env-413000  | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-413000           | force-systemd-env-413000  | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT | 19 Aug 24 04:12 PDT |
	| start   | -p docker-flags-366000                | docker-flags-366000       | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-955000             | force-systemd-flag-955000 | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-955000          | force-systemd-flag-955000 | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT | 19 Aug 24 04:12 PDT |
	| start   | -p cert-expiration-371000             | cert-expiration-371000    | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-366000 ssh               | docker-flags-366000       | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-366000 ssh               | docker-flags-366000       | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-366000                | docker-flags-366000       | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT | 19 Aug 24 04:12 PDT |
	| start   | -p cert-options-148000                | cert-options-148000       | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-148000 ssh               | cert-options-148000       | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-148000 -- sudo        | cert-options-148000       | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-148000                | cert-options-148000       | jenkins | v1.33.1 | 19 Aug 24 04:12 PDT | 19 Aug 24 04:12 PDT |
	| start   | -p running-upgrade-079000             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 04:12 PDT | 19 Aug 24 04:13 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-079000             | running-upgrade-079000    | jenkins | v1.33.1 | 19 Aug 24 04:13 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-371000             | cert-expiration-371000    | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-371000             | cert-expiration-371000    | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT | 19 Aug 24 04:15 PDT |
	| start   | -p kubernetes-upgrade-262000          | kubernetes-upgrade-262000 | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-262000          | kubernetes-upgrade-262000 | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT | 19 Aug 24 04:15 PDT |
	| start   | -p kubernetes-upgrade-262000          | kubernetes-upgrade-262000 | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-262000          | kubernetes-upgrade-262000 | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT | 19 Aug 24 04:15 PDT |
	| start   | -p stopped-upgrade-446000             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-446000 stop           | minikube                  | jenkins | v1.26.0 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:17 PDT |
	| start   | -p stopped-upgrade-446000             | stopped-upgrade-446000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 04:17:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 04:17:01.766790    4093 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:17:01.766927    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:01.766930    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:17:01.766933    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:01.767114    4093 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:17:01.768270    4093 out.go:352] Setting JSON to false
	I0819 04:17:01.785825    4093 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2784,"bootTime":1724063437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:17:01.785908    4093 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:17:01.789201    4093 out.go:177] * [stopped-upgrade-446000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:17:01.797076    4093 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:17:01.797133    4093 notify.go:220] Checking for updates...
	I0819 04:17:01.804994    4093 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:17:01.808094    4093 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:17:01.811134    4093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:17:01.812525    4093 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:17:01.816062    4093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:17:01.819328    4093 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:17:01.823147    4093 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 04:17:01.826066    4093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:17:01.829068    4093 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:17:01.836088    4093 start.go:297] selected driver: qemu2
	I0819 04:17:01.836093    4093 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50464 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:17:01.836163    4093 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:17:01.838866    4093 cni.go:84] Creating CNI manager for ""
	I0819 04:17:01.838885    4093 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:17:01.838914    4093 start.go:340] cluster config:
	{Name:stopped-upgrade-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50464 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:17:01.838973    4093 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:17:01.847024    4093 out.go:177] * Starting "stopped-upgrade-446000" primary control-plane node in "stopped-upgrade-446000" cluster
	I0819 04:17:01.851089    4093 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:17:01.851107    4093 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 04:17:01.851116    4093 cache.go:56] Caching tarball of preloaded images
	I0819 04:17:01.851180    4093 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:17:01.851186    4093 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 04:17:01.851254    4093 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/config.json ...
	I0819 04:17:01.851712    4093 start.go:360] acquireMachinesLock for stopped-upgrade-446000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:17:01.851748    4093 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "stopped-upgrade-446000"
	I0819 04:17:01.851758    4093 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:17:01.851763    4093 fix.go:54] fixHost starting: 
	I0819 04:17:01.851879    4093 fix.go:112] recreateIfNeeded on stopped-upgrade-446000: state=Stopped err=<nil>
	W0819 04:17:01.851888    4093 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:17:01.860026    4093 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-446000" ...
	I0819 04:17:00.098658    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:00.098669    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:00.113752    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:00.113765    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:00.125217    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:00.125229    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:00.145118    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:00.145131    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:00.162537    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:00.162549    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:00.174556    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:00.174568    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:00.191366    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:00.191377    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:00.225639    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:00.225651    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:00.240645    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:00.240656    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:02.765088    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:01.864115    4093 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:17:01.864200    4093 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50429-:22,hostfwd=tcp::50430-:2376,hostname=stopped-upgrade-446000 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/disk.qcow2
	I0819 04:17:01.910694    4093 main.go:141] libmachine: STDOUT: 
	I0819 04:17:01.910721    4093 main.go:141] libmachine: STDERR: 
	I0819 04:17:01.910726    4093 main.go:141] libmachine: Waiting for VM to start (ssh -p 50429 docker@127.0.0.1)...
	I0819 04:17:07.767700    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:07.767899    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:07.779690    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:07.779767    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:07.790588    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:07.790668    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:07.801725    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:07.801801    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:07.815367    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:07.815439    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:07.827061    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:07.827130    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:07.838455    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:07.838518    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:07.848720    3949 logs.go:276] 0 containers: []
	W0819 04:17:07.848732    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:07.848795    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:07.859544    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:07.859563    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:07.859568    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:07.882823    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:07.882833    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:07.898112    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:07.898123    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:07.910116    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:07.910126    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:07.933826    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:07.933837    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:07.971261    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:07.971281    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:07.975838    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:07.975847    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:08.013034    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:08.013046    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:08.034976    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:08.034991    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:08.047657    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:08.047669    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:08.070237    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:08.070254    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:08.082242    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:08.082254    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:08.094278    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:08.094291    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:08.111257    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:08.111267    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:08.133450    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:08.133468    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:08.146424    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:08.146434    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:08.158674    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:08.158685    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:10.673840    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:15.676077    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:15.676273    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:15.687794    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:15.687877    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:15.699006    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:15.699097    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:15.709837    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:15.709912    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:15.721476    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:15.721544    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:15.731970    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:15.732041    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:15.743108    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:15.743171    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:15.753866    3949 logs.go:276] 0 containers: []
	W0819 04:17:15.753877    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:15.753939    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:15.771085    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:15.771103    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:15.771108    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:15.791120    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:15.791130    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:15.810391    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:15.810401    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:15.821815    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:15.821826    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:15.826524    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:15.826534    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:15.838606    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:15.838619    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:15.857168    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:15.857178    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:15.894190    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:15.894198    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:15.908781    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:15.908794    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:15.922947    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:15.922958    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:15.934171    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:15.934180    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:15.956959    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:15.956969    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:15.969214    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:15.969227    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:16.006675    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:16.006687    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:16.029943    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:16.029953    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:16.041792    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:16.041803    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:16.054558    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:16.054569    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:18.568229    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:21.402328    4093 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/config.json ...
	I0819 04:17:21.402876    4093 machine.go:93] provisionDockerMachine start ...
	I0819 04:17:21.402996    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.403330    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.403340    4093 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 04:17:21.483712    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 04:17:21.483750    4093 buildroot.go:166] provisioning hostname "stopped-upgrade-446000"
	I0819 04:17:21.483853    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.484101    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.484112    4093 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-446000 && echo "stopped-upgrade-446000" | sudo tee /etc/hostname
	I0819 04:17:21.566078    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-446000
	
	I0819 04:17:21.566178    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.566373    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.566388    4093 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-446000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-446000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-446000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 04:17:21.638067    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 04:17:21.638086    4093 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19476-967/.minikube CaCertPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19476-967/.minikube}
	I0819 04:17:21.638103    4093 buildroot.go:174] setting up certificates
	I0819 04:17:21.638112    4093 provision.go:84] configureAuth start
	I0819 04:17:21.638120    4093 provision.go:143] copyHostCerts
	I0819 04:17:21.638217    4093 exec_runner.go:144] found /Users/jenkins/minikube-integration/19476-967/.minikube/cert.pem, removing ...
	I0819 04:17:21.638280    4093 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19476-967/.minikube/cert.pem
	I0819 04:17:21.638424    4093 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19476-967/.minikube/cert.pem (1123 bytes)
	I0819 04:17:21.638695    4093 exec_runner.go:144] found /Users/jenkins/minikube-integration/19476-967/.minikube/key.pem, removing ...
	I0819 04:17:21.638700    4093 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19476-967/.minikube/key.pem
	I0819 04:17:21.638774    4093 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19476-967/.minikube/key.pem (1675 bytes)
	I0819 04:17:21.638924    4093 exec_runner.go:144] found /Users/jenkins/minikube-integration/19476-967/.minikube/ca.pem, removing ...
	I0819 04:17:21.638929    4093 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19476-967/.minikube/ca.pem
	I0819 04:17:21.639002    4093 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19476-967/.minikube/ca.pem (1078 bytes)
	I0819 04:17:21.639135    4093 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19476-967/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-446000 san=[127.0.0.1 localhost minikube stopped-upgrade-446000]
	I0819 04:17:21.684969    4093 provision.go:177] copyRemoteCerts
	I0819 04:17:21.684998    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 04:17:21.685005    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:17:21.719811    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 04:17:21.726575    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 04:17:21.735134    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 04:17:21.741997    4093 provision.go:87] duration metric: took 103.880583ms to configureAuth
	I0819 04:17:21.742006    4093 buildroot.go:189] setting minikube options for container-runtime
	I0819 04:17:21.742132    4093 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:17:21.742164    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.742251    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.742256    4093 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 04:17:21.805743    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 04:17:21.805753    4093 buildroot.go:70] root file system type: tmpfs
	I0819 04:17:21.805801    4093 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 04:17:21.805892    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.806003    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.806037    4093 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 04:17:21.873736    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 04:17:21.873790    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.873912    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.873921    4093 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 04:17:22.218616    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 04:17:22.218628    4093 machine.go:96] duration metric: took 815.752125ms to provisionDockerMachine
	I0819 04:17:22.218639    4093 start.go:293] postStartSetup for "stopped-upgrade-446000" (driver="qemu2")
	I0819 04:17:22.218646    4093 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 04:17:22.218725    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 04:17:22.218734    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:17:22.251829    4093 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 04:17:22.253061    4093 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 04:17:22.253069    4093 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19476-967/.minikube/addons for local assets ...
	I0819 04:17:22.253153    4093 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19476-967/.minikube/files for local assets ...
	I0819 04:17:22.253273    4093 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem -> 14342.pem in /etc/ssl/certs
	I0819 04:17:22.253395    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 04:17:22.256378    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem --> /etc/ssl/certs/14342.pem (1708 bytes)
	I0819 04:17:22.263072    4093 start.go:296] duration metric: took 44.428583ms for postStartSetup
	I0819 04:17:22.263087    4093 fix.go:56] duration metric: took 20.411581833s for fixHost
	I0819 04:17:22.263122    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:22.263228    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:22.263232    4093 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 04:17:22.325534    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724066242.656644629
	
	I0819 04:17:22.325544    4093 fix.go:216] guest clock: 1724066242.656644629
	I0819 04:17:22.325550    4093 fix.go:229] Guest: 2024-08-19 04:17:22.656644629 -0700 PDT Remote: 2024-08-19 04:17:22.263089 -0700 PDT m=+20.521331959 (delta=393.555629ms)
	I0819 04:17:22.325563    4093 fix.go:200] guest clock delta is within tolerance: 393.555629ms
	I0819 04:17:22.325566    4093 start.go:83] releasing machines lock for "stopped-upgrade-446000", held for 20.474071s
	I0819 04:17:22.325627    4093 ssh_runner.go:195] Run: cat /version.json
	I0819 04:17:22.325634    4093 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 04:17:22.325635    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:17:22.325655    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	W0819 04:17:22.326308    4093 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50429: connect: connection refused
	I0819 04:17:22.326335    4093 retry.go:31] will retry after 347.605855ms: dial tcp [::1]:50429: connect: connection refused
	W0819 04:17:22.731456    4093 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 04:17:22.731608    4093 ssh_runner.go:195] Run: systemctl --version
	I0819 04:17:22.735633    4093 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 04:17:22.739066    4093 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 04:17:22.739137    4093 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 04:17:22.745478    4093 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 04:17:22.753915    4093 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 04:17:22.753931    4093 start.go:495] detecting cgroup driver to use...
	I0819 04:17:22.754045    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:17:22.763865    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 04:17:22.768262    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 04:17:22.772578    4093 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 04:17:22.772609    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 04:17:22.776564    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:17:22.780414    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 04:17:22.784019    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:17:22.787365    4093 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 04:17:22.790279    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 04:17:22.793239    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 04:17:22.796493    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 04:17:22.799998    4093 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 04:17:22.802871    4093 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 04:17:22.805641    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:22.864616    4093 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 04:17:22.871007    4093 start.go:495] detecting cgroup driver to use...
	I0819 04:17:22.871074    4093 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 04:17:22.882345    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:17:22.887445    4093 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 04:17:22.895905    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:17:22.900490    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:17:22.905114    4093 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 04:17:22.932539    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:17:22.937494    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:17:22.942910    4093 ssh_runner.go:195] Run: which cri-dockerd
	I0819 04:17:22.944230    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 04:17:22.947109    4093 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 04:17:22.952223    4093 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 04:17:23.018390    4093 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 04:17:23.082508    4093 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 04:17:23.082569    4093 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 04:17:23.087558    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:23.145294    4093 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:17:24.271748    4093 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.126451667s)
	I0819 04:17:24.271805    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 04:17:24.280069    4093 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 04:17:24.286029    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:17:24.290805    4093 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 04:17:24.349524    4093 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 04:17:24.417619    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:24.476438    4093 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 04:17:24.482294    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:17:24.486692    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:24.575800    4093 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 04:17:24.619211    4093 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 04:17:24.619303    4093 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 04:17:24.621228    4093 start.go:563] Will wait 60s for crictl version
	I0819 04:17:24.621274    4093 ssh_runner.go:195] Run: which crictl
	I0819 04:17:24.622738    4093 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 04:17:24.636891    4093 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 04:17:24.636957    4093 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:17:24.653133    4093 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:17:23.570989    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:23.571247    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:23.597221    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:23.597348    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:23.614812    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:23.614897    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:23.628467    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:23.628544    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:23.639842    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:23.639908    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:23.654364    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:23.654440    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:23.665207    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:23.665274    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:23.675565    3949 logs.go:276] 0 containers: []
	W0819 04:17:23.675577    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:23.675633    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:23.686006    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:23.686027    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:23.686033    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:23.706491    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:23.706505    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:23.718197    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:23.718211    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:23.735271    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:23.735281    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:23.771598    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:23.771610    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:23.784384    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:23.784394    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:23.796988    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:23.797001    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:23.833886    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:23.833898    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:23.859280    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:23.859294    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:23.863952    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:23.863962    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:23.898883    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:23.898896    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:23.915925    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:23.915936    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:23.928778    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:23.928793    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:23.943156    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:23.943167    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:23.959445    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:23.959458    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:23.977087    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:23.977097    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:23.991754    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:23.991766    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
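Each `Gathering logs for ...` pair in the 3949 process above follows one pattern: resolve container IDs with a `docker ps -a` name filter (container names are `k8s_<component>_...`, so current and exited instances both match), then tail the last 400 lines of each. A sketch of that pattern, assuming a local docker CLI rather than minikube's SSH runner:
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Sketch of the log-gathering pattern above, assuming a local docker
// CLI. A name filter on "k8s_etcd" finds every etcd container, running
// or exited.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("etcd")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("== etcd [%s] ==\n%s", id, logs)
	}
}
-- /sketch --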
	I0819 04:17:24.678996    4093 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 04:17:24.679060    4093 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 04:17:24.680393    4093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
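The one-liner above is an idempotent /etc/hosts update: `grep -v` drops any existing `host.minikube.internal` entry, `echo` appends the fresh tab-separated mapping, the result goes to a PID-keyed temp file, and `sudo cp` installs it. A sketch that builds the same command for an arbitrary entry (`hostsCmd`, `ip`, and `name` are illustrative, not minikube's API):
-- sketch (Go) --
package main

import "fmt"

// Sketch: build the idempotent /etc/hosts update one-liner seen above.
// The grep pattern uses bash's $'\t' (a literal backslash-t in the Go
// string), while the echo embeds a real tab via Go's \t escape.
func hostsCmd(ip, name string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
}

func main() {
	fmt.Println(hostsCmd("10.0.2.2", "host.minikube.internal"))
}
-- /sketch --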
	I0819 04:17:24.684188    4093 kubeadm.go:883] updating cluster {Name:stopped-upgrade-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50464 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 04:17:24.684232    4093 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:17:24.684272    4093 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:17:24.694556    4093 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:17:24.694575    4093 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:17:24.694619    4093 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:17:24.697485    4093 ssh_runner.go:195] Run: which lz4
	I0819 04:17:24.698902    4093 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 04:17:24.700124    4093 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 04:17:24.700133    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 04:17:25.646464    4093 docker.go:649] duration metric: took 947.6025ms to copy over tarball
	I0819 04:17:25.646525    4093 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 04:17:26.505397    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:26.808560    4093 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162036s)
	I0819 04:17:26.808573    4093 ssh_runner.go:146] rm: /preloaded.tar.lz4
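The preload sequence above shows ssh_runner's existence check in action: `stat -c "%s %y"` exits with status 1 because the tarball is absent, which triggers the ~360 MB scp; the tarball is then unpacked with `tar --xattrs --xattrs-include security.capability -I lz4 -C /var` so image layers land under /var/lib/docker with file capabilities preserved, and finally removed. A sketch of the check-then-copy half, with `run` standing in for a command executed on the guest over SSH:
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// Sketch of ssh_runner's existence check: stat the destination first
// and only transfer when stat fails.
func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	dst := "/preloaded.tar.lz4"
	if err := run("stat", "-c", "%s %y", dst); err != nil {
		fmt.Println("absent, copying preload tarball to", dst)
		// the real runner scp's ~360 MB here, then extracts and removes it
	} else {
		fmt.Println(dst, "already present, skipping transfer")
	}
}
-- /sketch --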
	I0819 04:17:26.824313    4093 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:17:26.828109    4093 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 04:17:26.833078    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:26.902240    4093 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:17:29.212033    4093 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.309806375s)
	I0819 04:17:29.212143    4093 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:17:29.223914    4093 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:17:29.223923    4093 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:17:29.223927    4093 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 04:17:29.227778    4093 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.229435    4093 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.231267    4093 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.231810    4093 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.233244    4093 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.233527    4093 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.234996    4093 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.235025    4093 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.236981    4093 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.237002    4093 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 04:17:29.238613    4093 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.238668    4093 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.239633    4093 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.239714    4093 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 04:17:29.240456    4093 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.241031    4093 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.664630    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.665515    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.675408    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.685046    4093 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 04:17:29.685088    4093 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.685047    4093 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 04:17:29.685121    4093 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.685143    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.685151    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.688893    4093 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 04:17:29.688917    4093 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.688971    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.693295    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.700610    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 04:17:29.700653    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 04:17:29.711932    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 04:17:29.714999    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 04:17:29.716267    4093 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 04:17:29.716286    4093 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.716318    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.728123    4093 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 04:17:29.728147    4093 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 04:17:29.728203    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 04:17:29.728826    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 04:17:29.742488    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 04:17:29.742610    4093 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 04:17:29.745045    4093 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 04:17:29.745058    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 04:17:29.752999    4093 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 04:17:29.753009    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0819 04:17:29.753647    4093 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 04:17:29.753757    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.763798    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.790787    4093 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
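docker.go:304 loads each transferred image tarball by streaming it into the daemon with `sudo cat <file> | docker load`, as in the Run lines above. A sketch of that load step, assuming a local shell:
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// Sketch: stream a cached image tarball into `docker load` via a bash
// pipeline, mirroring the Run line above.
func loadImage(path string) error {
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", path))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load %s: %v: %s", path, err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println(err)
	}
}
-- /sketch --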
	I0819 04:17:29.790837    4093 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 04:17:29.790857    4093 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.790864    4093 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 04:17:29.790874    4093 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.790913    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.790913    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.815237    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 04:17:29.815250    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 04:17:29.815350    4093 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:17:29.816789    4093 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 04:17:29.816803    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0819 04:17:29.851727    4093 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 04:17:29.851824    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.855577    4093 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:17:29.855588    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 04:17:29.866791    4093 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 04:17:29.866816    4093 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.866883    4093 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.899966    4093 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 04:17:29.900008    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 04:17:29.900112    4093 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:17:29.901581    4093 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 04:17:29.901594    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 04:17:29.930333    4093 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:17:29.930348    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 04:17:30.168392    4093 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 04:17:30.168430    4093 cache_images.go:92] duration metric: took 944.508166ms to LoadCachedImages
	W0819 04:17:30.168464    4093 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0819 04:17:30.168471    4093 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 04:17:30.168523    4093 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-446000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 04:17:30.168591    4093 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 04:17:30.183720    4093 cni.go:84] Creating CNI manager for ""
	I0819 04:17:30.183732    4093 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:17:30.183738    4093 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 04:17:30.183747    4093 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-446000 NodeName:stopped-upgrade-446000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 04:17:30.183811    4093 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-446000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 04:17:30.183875    4093 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 04:17:30.187158    4093 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 04:17:30.187184    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 04:17:30.189654    4093 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 04:17:30.194534    4093 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 04:17:30.199101    4093 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
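Note the staging convention above: the generated kubeadm config is written to kubeadm.yaml.new rather than kubeadm.yaml, so the later `diff -u` step can detect drift against whatever config the old cluster was built with before overwriting it. The config itself carries minikube's throwaway-VM defaults: kubelet disk eviction is disabled (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at 0%), swap is tolerated (failSwapOn: false), and etcd's proxy-refresh-interval is raised to 70000.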
	I0819 04:17:30.204375    4093 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 04:17:30.205508    4093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 04:17:30.209257    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:30.274671    4093 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:17:30.279980    4093 certs.go:68] Setting up /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000 for IP: 10.0.2.15
	I0819 04:17:30.279997    4093 certs.go:194] generating shared ca certs ...
	I0819 04:17:30.280005    4093 certs.go:226] acquiring lock for ca certs: {Name:mk0a363c308d59dcc2ce68f87ac07833cd4c8b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:17:30.280165    4093 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19476-967/.minikube/ca.key
	I0819 04:17:30.280222    4093 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19476-967/.minikube/proxy-client-ca.key
	I0819 04:17:30.280227    4093 certs.go:256] generating profile certs ...
	I0819 04:17:30.280334    4093 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.key
	I0819 04:17:30.280353    4093 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key.79083a89
	I0819 04:17:30.280373    4093 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt.79083a89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 04:17:30.377677    4093 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt.79083a89 ...
	I0819 04:17:30.377689    4093 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt.79083a89: {Name:mk6e775c3f27064abb4a4684c0772522306ade8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:17:30.378130    4093 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key.79083a89 ...
	I0819 04:17:30.378142    4093 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key.79083a89: {Name:mkf086d304ce0538594ff4dfb6a94e5895aa61d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:17:30.378302    4093 certs.go:381] copying /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt.79083a89 -> /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt
	I0819 04:17:30.378442    4093 certs.go:385] copying /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key.79083a89 -> /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key
	I0819 04:17:30.378598    4093 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/proxy-client.key
	I0819 04:17:30.378729    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/1434.pem (1338 bytes)
	W0819 04:17:30.378758    4093 certs.go:480] ignoring /Users/jenkins/minikube-integration/19476-967/.minikube/certs/1434_empty.pem, impossibly tiny 0 bytes
	I0819 04:17:30.378762    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 04:17:30.378783    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem (1078 bytes)
	I0819 04:17:30.378801    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem (1123 bytes)
	I0819 04:17:30.378819    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/key.pem (1675 bytes)
	I0819 04:17:30.378859    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem (1708 bytes)
	I0819 04:17:30.379215    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 04:17:30.386119    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0819 04:17:30.393016    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 04:17:30.400444    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 04:17:30.407972    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 04:17:30.414791    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 04:17:30.421593    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 04:17:30.429346    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 04:17:30.436845    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/certs/1434.pem --> /usr/share/ca-certificates/1434.pem (1338 bytes)
	I0819 04:17:30.444461    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem --> /usr/share/ca-certificates/14342.pem (1708 bytes)
	I0819 04:17:30.451710    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 04:17:30.458686    4093 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 04:17:30.463635    4093 ssh_runner.go:195] Run: openssl version
	I0819 04:17:30.465607    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434.pem && ln -fs /usr/share/ca-certificates/1434.pem /etc/ssl/certs/1434.pem"
	I0819 04:17:30.469105    4093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434.pem
	I0819 04:17:30.470648    4093 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 10:42 /usr/share/ca-certificates/1434.pem
	I0819 04:17:30.470673    4093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434.pem
	I0819 04:17:30.472447    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1434.pem /etc/ssl/certs/51391683.0"
	I0819 04:17:30.475642    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14342.pem && ln -fs /usr/share/ca-certificates/14342.pem /etc/ssl/certs/14342.pem"
	I0819 04:17:30.478559    4093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14342.pem
	I0819 04:17:30.479861    4093 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 10:42 /usr/share/ca-certificates/14342.pem
	I0819 04:17:30.479880    4093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14342.pem
	I0819 04:17:30.481644    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14342.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 04:17:30.484919    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 04:17:30.488398    4093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:17:30.489941    4093 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:35 /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:17:30.489960    4093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:17:30.491827    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 04:17:30.494800    4093 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 04:17:30.496340    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 04:17:30.498421    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 04:17:30.500183    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 04:17:30.502259    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 04:17:30.504070    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 04:17:30.506050    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
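The six openssl runs above verify that each control-plane certificate stays valid for at least another day (`-checkend 86400` = 24 hours in seconds). The same check expressed with Go's crypto/x509, as a sketch:
-- sketch (Go) --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Sketch: the Go equivalent of `openssl x509 -noout -checkend 86400`,
// reporting true when the cert expires within the given window.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
-- /sketch --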
	I0819 04:17:30.507855    4093 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50464 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:17:30.507916    4093 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:17:30.518237    4093 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 04:17:30.521236    4093 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 04:17:30.521241    4093 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 04:17:30.521270    4093 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 04:17:30.525085    4093 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:17:30.525393    4093 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-446000" does not appear in /Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:17:30.525486    4093 kubeconfig.go:62] /Users/jenkins/minikube-integration/19476-967/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-446000" cluster setting kubeconfig missing "stopped-upgrade-446000" context setting]
	I0819 04:17:30.525678    4093 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/kubeconfig: {Name:mkcc8b27cbda2ef567c4911aa335c1e1951a7d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:17:30.526123    4093 kapi.go:59] client config for stopped-upgrade-446000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.key", CAFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102335610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:17:30.526453    4093 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 04:17:30.529267    4093 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-446000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
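The drift check itself is just `diff -u` over the old and new kubeadm configs: exit 0 means identical, exit 1 means the files differ, and a non-empty diff (as above) triggers a reconfigure from the .new file. A minimal sketch:
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// Sketch of the drift check above: a non-nil error from `diff -u`
// means the files differ (or one is missing), and the captured diff
// is what gets logged before reconfiguring.
func configDrifted(oldPath, newPath string) (bool, string) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	return err != nil, string(out)
}

func main() {
	drifted, diff := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if drifted {
		fmt.Printf("reconfiguring cluster, config drift:\n%s", diff)
	}
}
-- /sketch --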
	I0819 04:17:30.529273    4093 kubeadm.go:1160] stopping kube-system containers ...
	I0819 04:17:30.529312    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:17:30.540264    4093 docker.go:483] Stopping containers: [ce491870b40f 672093e300cc b3b1f57bf431 f5cd372c916c f610b8f4a094 12cba185f1e7 6add09fad9b2 d5f6a5d583d3 a97e1971b34a]
	I0819 04:17:30.540331    4093 ssh_runner.go:195] Run: docker stop ce491870b40f 672093e300cc b3b1f57bf431 f5cd372c916c f610b8f4a094 12cba185f1e7 6add09fad9b2 d5f6a5d583d3 a97e1971b34a
	I0819 04:17:30.551450    4093 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 04:17:30.556667    4093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:17:30.559880    4093 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:17:30.559887    4093 kubeadm.go:157] found existing configuration files:
	
	I0819 04:17:30.559908    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/admin.conf
	I0819 04:17:30.562333    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:17:30.562355    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:17:30.565033    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/kubelet.conf
	I0819 04:17:30.567827    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:17:30.567854    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:17:30.570440    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/controller-manager.conf
	I0819 04:17:30.573042    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:17:30.573062    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:17:30.576097    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/scheduler.conf
	I0819 04:17:30.578474    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:17:30.578494    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
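The grep/rm sequence above is a stale-kubeconfig sweep: each file kubeadm would otherwise reuse is kept only if it already points at the expected control-plane endpoint (here port 50464). Since none of the four files exist yet, every grep exits with status 2 and the `rm -f` calls are no-ops. A sketch of the loop:
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// Sketch: keep each kubeadm-managed kubeconfig only if it already
// references the expected endpoint; otherwise remove it so kubeadm
// regenerates it in the kubeconfig phase.
func main() {
	endpoint := "https://control-plane.minikube.internal:50464"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Println("removing stale", conf)
			exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
-- /sketch --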
	I0819 04:17:30.581041    4093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:17:30.583904    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:17:30.607165    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:17:31.096972    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:17:31.211552    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:17:31.238303    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
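Because existing configuration files were found, minikube replays individual `kubeadm init` phases instead of running a full init: certs, kubeconfig, kubelet-start, control-plane, then local etcd, each with PATH pointed at the cached v1.24.1 binaries as in the Run lines above. A sketch of that sequence:
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// Sketch: replay the kubeadm init phases used by the restart path,
// in the order logged above.
func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
-- /sketch --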
	I0819 04:17:31.258495    4093 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:17:31.258585    4093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:17:31.760743    4093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:17:31.507584    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:31.507727    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:31.520591    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:31.520667    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:31.531382    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:31.531452    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:31.545813    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:31.545888    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:31.556352    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:31.556414    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:31.567213    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:31.567278    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:31.577735    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:31.577809    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:31.587693    3949 logs.go:276] 0 containers: []
	W0819 04:17:31.587704    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:31.587764    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:31.598292    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:31.598308    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:31.598314    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:31.612120    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:31.612134    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:31.627242    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:31.627254    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:31.639231    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:31.639240    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:31.657967    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:31.657980    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:31.677803    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:31.677816    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:31.689818    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:31.689831    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:31.725891    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:31.725901    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:31.730105    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:31.730114    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:31.753505    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:31.753529    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:31.767648    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:31.767663    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:31.805583    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:31.805595    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:31.830305    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:31.830318    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:31.843172    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:31.843184    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:31.867154    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:31.867169    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:31.880659    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:31.880672    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:31.900807    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:31.900829    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:34.417861    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:32.260629    4093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:17:32.266629    4093 api_server.go:72] duration metric: took 1.0081475s to wait for apiserver process to appear ...
	I0819 04:17:32.266639    4093 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:17:32.266653    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
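Both processes are now in the same healthz wait: poll https://10.0.2.15:8443/healthz until it answers 200 or the deadline passes; the `stopped:` lines are the client timing out. A sketch of such a loop (InsecureSkipVerify is a shortcut for the sketch; minikube builds its client from the cluster CA, as the kapi client config above shows):
-- sketch (Go) --
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Sketch: poll the apiserver's /healthz endpoint until it returns 200
// or the overall deadline passes.
func main() {
	client := &http.Client{
		Timeout: 4 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if err != nil {
			fmt.Println("stopped:", err)
		} else {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for healthz")
}
-- /sketch --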
	I0819 04:17:39.420087    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:39.420222    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:39.436327    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:39.436413    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:39.448168    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:39.448244    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:39.459054    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:39.459130    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:39.470225    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:39.470295    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:39.481057    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:39.481120    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:39.494210    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:39.494287    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:39.504106    3949 logs.go:276] 0 containers: []
	W0819 04:17:39.504115    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:39.504168    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:39.515020    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:39.515036    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:39.515042    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:39.532915    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:39.532927    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:39.549300    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:39.549311    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:39.570608    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:39.570620    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:39.582655    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:39.582665    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:39.594547    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:39.594559    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:39.599591    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:39.599599    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:39.637659    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:39.637670    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:39.652545    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:39.652559    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:39.691784    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:39.691799    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:39.705387    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:39.705401    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:39.716989    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:39.717003    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:39.728872    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:39.728885    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:39.748706    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:39.748719    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:39.760715    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:39.760728    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:39.776524    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:39.776535    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:39.799114    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:39.799122    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:37.268725    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:37.268751    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:42.314087    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:42.268952    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:42.268994    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:47.316248    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:47.316357    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:47.328701    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:47.328783    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:47.339930    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:47.340006    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:47.350939    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:47.351016    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:47.361292    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:47.361368    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:47.372145    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:47.372216    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:47.383081    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:47.383148    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:47.393388    3949 logs.go:276] 0 containers: []
	W0819 04:17:47.393399    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:47.393457    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:47.404333    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:47.404352    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:47.404357    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:47.416193    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:47.416203    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:47.453417    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:47.453431    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:47.489440    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:47.489453    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:47.505685    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:47.505695    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:47.517653    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:47.517665    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:47.533487    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:47.533499    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:47.546090    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:47.546102    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:47.564289    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:47.564299    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:47.575862    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:47.575875    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:47.591269    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:47.591282    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:47.613451    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:47.613460    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:47.641481    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:47.641494    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:47.656404    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:47.656418    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:47.669086    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:47.669097    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:47.686860    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:47.686869    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:47.691328    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:47.691338    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:47.269365    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:47.269386    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:50.207234    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:52.269746    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:52.269767    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:55.209438    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:55.209591    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:17:55.221565    3949 logs.go:276] 2 containers: [e6e08462a43e 82e016e3639d]
	I0819 04:17:55.221657    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:17:55.237258    3949 logs.go:276] 2 containers: [124abd52fd44 cea274700c6b]
	I0819 04:17:55.237333    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:17:55.248204    3949 logs.go:276] 1 containers: [086adbfeded2]
	I0819 04:17:55.248269    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:17:55.259397    3949 logs.go:276] 2 containers: [6362a51486fb b19a94fd47ab]
	I0819 04:17:55.259468    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:17:55.270949    3949 logs.go:276] 1 containers: [9f601f76c443]
	I0819 04:17:55.271013    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:17:55.281921    3949 logs.go:276] 2 containers: [19fa56b6b5d8 fcadb869ae9b]
	I0819 04:17:55.281987    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:17:55.299649    3949 logs.go:276] 0 containers: []
	W0819 04:17:55.299659    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:17:55.299725    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:17:55.310467    3949 logs.go:276] 2 containers: [0d999e2f9c91 f2aeab8371d3]
	I0819 04:17:55.310486    3949 logs.go:123] Gathering logs for kube-controller-manager [19fa56b6b5d8] ...
	I0819 04:17:55.310491    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fa56b6b5d8"
	I0819 04:17:55.328410    3949 logs.go:123] Gathering logs for kube-controller-manager [fcadb869ae9b] ...
	I0819 04:17:55.328420    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcadb869ae9b"
	I0819 04:17:55.342155    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:17:55.342166    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:17:55.378202    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:17:55.378211    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:17:55.413204    3949 logs.go:123] Gathering logs for kube-scheduler [6362a51486fb] ...
	I0819 04:17:55.413216    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6362a51486fb"
	I0819 04:17:55.425726    3949 logs.go:123] Gathering logs for kube-proxy [9f601f76c443] ...
	I0819 04:17:55.425738    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f601f76c443"
	I0819 04:17:55.439269    3949 logs.go:123] Gathering logs for coredns [086adbfeded2] ...
	I0819 04:17:55.439283    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086adbfeded2"
	I0819 04:17:55.452988    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:17:55.453005    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:17:55.465138    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:17:55.465153    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:17:55.470435    3949 logs.go:123] Gathering logs for kube-apiserver [82e016e3639d] ...
	I0819 04:17:55.470442    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e016e3639d"
	I0819 04:17:55.490174    3949 logs.go:123] Gathering logs for etcd [124abd52fd44] ...
	I0819 04:17:55.490185    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124abd52fd44"
	I0819 04:17:55.504080    3949 logs.go:123] Gathering logs for etcd [cea274700c6b] ...
	I0819 04:17:55.504095    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea274700c6b"
	I0819 04:17:55.522523    3949 logs.go:123] Gathering logs for kube-apiserver [e6e08462a43e] ...
	I0819 04:17:55.522534    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e08462a43e"
	I0819 04:17:55.536838    3949 logs.go:123] Gathering logs for storage-provisioner [f2aeab8371d3] ...
	I0819 04:17:55.536848    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2aeab8371d3"
	I0819 04:17:55.548045    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:17:55.548056    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:17:55.571235    3949 logs.go:123] Gathering logs for kube-scheduler [b19a94fd47ab] ...
	I0819 04:17:55.571258    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b19a94fd47ab"
	I0819 04:17:55.589076    3949 logs.go:123] Gathering logs for storage-provisioner [0d999e2f9c91] ...
	I0819 04:17:55.589087    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d999e2f9c91"
	I0819 04:17:58.102655    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:57.270283    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:57.270331    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:03.105373    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:03.105507    3949 kubeadm.go:597] duration metric: took 4m4.063838917s to restartPrimaryControlPlane
	W0819 04:18:03.105624    3949 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 04:18:03.105683    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 04:18:04.106308    3949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.000623875s)
	I0819 04:18:04.106379    3949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 04:18:04.111546    3949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:18:04.114480    3949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:18:04.117422    3949 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:18:04.117428    3949 kubeadm.go:157] found existing configuration files:
	
	I0819 04:18:04.117453    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/admin.conf
	I0819 04:18:04.119944    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:18:04.119969    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:18:04.122721    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/kubelet.conf
	I0819 04:18:04.125970    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:18:04.125998    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:18:04.129031    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/controller-manager.conf
	I0819 04:18:04.131597    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:18:04.131622    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:18:04.134586    3949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/scheduler.conf
	I0819 04:18:04.137788    3949 kubeadm.go:163] "https://control-plane.minikube.internal:50264" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50264 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:18:04.137817    3949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
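
The grep/rm pairs above are minikube's stale-kubeconfig sweep: any /etc/kubernetes/*.conf that cannot be shown to reference the expected control-plane endpoint is deleted before kubeadm regenerates it (here every grep exits with status 2 because the files are already missing after the reset). Condensed into a shell sketch — endpoint copied from the log, loop form assumed:

    for f in admin kubelet controller-manager scheduler; do
        # keep the file only if it already points at the expected control plane
        sudo grep -q "https://control-plane.minikube.internal:50264" /etc/kubernetes/${f}.conf \
            || sudo rm -f /etc/kubernetes/${f}.conf
    done
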
	I0819 04:18:04.140976    3949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 04:18:04.159295    3949 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 04:18:04.159395    3949 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 04:18:04.208932    3949 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 04:18:04.208986    3949 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 04:18:04.209035    3949 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 04:18:04.263218    3949 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 04:18:04.267168    3949 out.go:235]   - Generating certificates and keys ...
	I0819 04:18:04.267203    3949 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 04:18:04.267234    3949 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 04:18:04.267277    3949 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 04:18:04.267311    3949 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 04:18:04.267344    3949 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 04:18:04.267375    3949 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 04:18:04.267406    3949 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 04:18:04.267434    3949 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 04:18:04.267505    3949 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 04:18:04.267572    3949 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 04:18:04.267593    3949 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 04:18:04.267629    3949 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 04:18:04.352173    3949 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 04:18:04.532900    3949 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 04:18:04.617771    3949 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 04:18:04.789926    3949 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 04:18:04.818290    3949 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 04:18:04.818641    3949 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 04:18:04.818783    3949 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 04:18:04.905432    3949 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 04:18:04.909314    3949 out.go:235]   - Booting up control plane ...
	I0819 04:18:04.909366    3949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 04:18:04.909404    3949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 04:18:04.914810    3949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 04:18:04.915056    3949 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 04:18:04.915814    3949 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 04:18:02.270974    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:02.271021    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:08.917337    3949 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001264 seconds
	I0819 04:18:08.917570    3949 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 04:18:08.921499    3949 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 04:18:09.434875    3949 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 04:18:09.435131    3949 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-079000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 04:18:09.941123    3949 kubeadm.go:310] [bootstrap-token] Using token: ronyev.g7zknjg3pm347ihg
	I0819 04:18:09.945112    3949 out.go:235]   - Configuring RBAC rules ...
	I0819 04:18:09.945177    3949 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 04:18:09.945231    3949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 04:18:09.949042    3949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 04:18:09.949968    3949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 04:18:09.951111    3949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 04:18:09.952361    3949 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 04:18:09.955924    3949 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 04:18:10.133705    3949 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 04:18:10.346235    3949 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 04:18:10.346247    3949 kubeadm.go:310] 
	I0819 04:18:10.346288    3949 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 04:18:10.346291    3949 kubeadm.go:310] 
	I0819 04:18:10.346326    3949 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 04:18:10.346330    3949 kubeadm.go:310] 
	I0819 04:18:10.346341    3949 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 04:18:10.346368    3949 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 04:18:10.346406    3949 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 04:18:10.346413    3949 kubeadm.go:310] 
	I0819 04:18:10.346448    3949 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 04:18:10.346452    3949 kubeadm.go:310] 
	I0819 04:18:10.346471    3949 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 04:18:10.346474    3949 kubeadm.go:310] 
	I0819 04:18:10.346498    3949 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 04:18:10.346542    3949 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 04:18:10.346587    3949 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 04:18:10.346594    3949 kubeadm.go:310] 
	I0819 04:18:10.346630    3949 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 04:18:10.346672    3949 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 04:18:10.346678    3949 kubeadm.go:310] 
	I0819 04:18:10.346729    3949 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ronyev.g7zknjg3pm347ihg \
	I0819 04:18:10.346779    3949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:200cf9aaf4d8090b061170c9280858f68184aa10356c82792dd3b43229196e5e \
	I0819 04:18:10.346789    3949 kubeadm.go:310] 	--control-plane 
	I0819 04:18:10.346791    3949 kubeadm.go:310] 
	I0819 04:18:10.346843    3949 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 04:18:10.346851    3949 kubeadm.go:310] 
	I0819 04:18:10.346887    3949 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ronyev.g7zknjg3pm347ihg \
	I0819 04:18:10.346933    3949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:200cf9aaf4d8090b061170c9280858f68184aa10356c82792dd3b43229196e5e 
	I0819 04:18:10.346995    3949 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 04:18:10.347003    3949 cni.go:84] Creating CNI manager for ""
	I0819 04:18:10.347010    3949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:18:10.355178    3949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 04:18:10.358381    3949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 04:18:10.361485    3949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 04:18:10.368116    3949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 04:18:10.368170    3949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 04:18:10.368170    3949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-079000 minikube.k8s.io/updated_at=2024_08_19T04_18_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=running-upgrade-079000 minikube.k8s.io/primary=true
	I0819 04:18:10.408691    3949 ops.go:34] apiserver oom_adj: -16
	I0819 04:18:10.408782    3949 kubeadm.go:1113] duration metric: took 40.662916ms to wait for elevateKubeSystemPrivileges
	I0819 04:18:10.408795    3949 kubeadm.go:394] duration metric: took 4m11.380638375s to StartCluster
	I0819 04:18:10.408804    3949 settings.go:142] acquiring lock: {Name:mkadddaa5ec690138051e9a9334213fba69e0867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:18:10.408888    3949 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:18:10.409276    3949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/kubeconfig: {Name:mkcc8b27cbda2ef567c4911aa335c1e1951a7d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:18:10.409480    3949 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:18:10.409532    3949 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 04:18:10.409573    3949 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-079000"
	I0819 04:18:10.409585    3949 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-079000"
	W0819 04:18:10.409591    3949 addons.go:243] addon storage-provisioner should already be in state true
	I0819 04:18:10.409601    3949 host.go:66] Checking if "running-upgrade-079000" exists ...
	I0819 04:18:10.409592    3949 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-079000"
	I0819 04:18:10.409630    3949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-079000"
	I0819 04:18:10.409735    3949 config.go:182] Loaded profile config "running-upgrade-079000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:18:10.409862    3949 retry.go:31] will retry after 1.413875858s: connect: dial unix /Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/monitor: connect: connection refused
	I0819 04:18:10.410654    3949 kapi.go:59] client config for running-upgrade-079000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/running-upgrade-079000/client.key", CAFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106391610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:18:10.410775    3949 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-079000"
	W0819 04:18:10.410779    3949 addons.go:243] addon default-storageclass should already be in state true
	I0819 04:18:10.410785    3949 host.go:66] Checking if "running-upgrade-079000" exists ...
	I0819 04:18:10.411295    3949 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 04:18:10.411299    3949 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 04:18:10.411304    3949 sshutil.go:53] new ssh client: &{IP:localhost Port:50232 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/id_rsa Username:docker}
	I0819 04:18:10.412424    3949 out.go:177] * Verifying Kubernetes components...
	I0819 04:18:07.272360    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:07.272387    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:10.417189    3949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:18:10.512246    3949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:18:10.518389    3949 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:18:10.518438    3949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:18:10.523271    3949 api_server.go:72] duration metric: took 113.7805ms to wait for apiserver process to appear ...
	I0819 04:18:10.523281    3949 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:18:10.523291    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:10.598191    3949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 04:18:10.898418    3949 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 04:18:10.898431    3949 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 04:18:11.831721    3949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:18:11.836798    3949 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:18:11.836808    3949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 04:18:11.836821    3949 sshutil.go:53] new ssh client: &{IP:localhost Port:50232 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/running-upgrade-079000/id_rsa Username:docker}
	I0819 04:18:11.868094    3949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:18:12.273585    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:12.273606    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:15.524420    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:15.524456    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:17.274050    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:17.274137    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:20.525254    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:20.525316    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:22.276391    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:22.276425    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:25.525526    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:25.525548    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:27.278576    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:27.278602    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:30.525756    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:30.525789    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:32.280770    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:32.280947    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:18:32.292778    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:18:32.292857    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:18:32.304126    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:18:32.304202    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:18:32.314991    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:18:32.315051    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:18:32.325980    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:18:32.326065    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:18:32.336613    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:18:32.336684    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:18:32.346922    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:18:32.346998    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:18:32.366953    4093 logs.go:276] 0 containers: []
	W0819 04:18:32.366964    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:18:32.367022    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:18:32.378966    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:18:32.378986    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:18:32.378992    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:18:32.393609    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:18:32.393620    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:18:32.410720    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:18:32.410731    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:18:32.426516    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:18:32.426529    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:18:32.453102    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:18:32.453109    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:18:32.532657    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:18:32.532668    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:18:32.547021    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:18:32.547033    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:18:32.559374    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:18:32.559384    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:18:32.570607    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:18:32.570617    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:18:32.607262    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:18:32.607357    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:18:32.607933    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:18:32.607938    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:18:32.612211    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:18:32.612220    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:18:32.635153    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:18:32.635167    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:18:32.649919    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:18:32.649929    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:18:32.693640    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:18:32.693652    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:18:32.709018    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:18:32.709032    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:18:32.720762    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:18:32.720778    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:18:32.733314    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:18:32.733329    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:18:32.745367    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:18:32.745378    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:18:32.745406    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:18:32.745410    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:18:32.745414    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:18:32.745419    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:18:32.745422    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:18:35.526196    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:35.526217    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:40.526654    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:40.526705    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 04:18:40.900453    3949 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 04:18:40.904773    3949 out.go:177] * Enabled addons: storage-provisioner
	I0819 04:18:40.913620    3949 addons.go:510] duration metric: took 30.504467333s for enable addons: enabled=[storage-provisioner]
	I0819 04:18:42.749522    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:45.527456    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:45.527494    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:47.751902    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:47.752149    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:18:47.776468    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:18:47.776556    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:18:47.790711    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:18:47.790793    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:18:47.804641    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:18:47.804714    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:18:47.818541    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:18:47.818608    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:18:47.829723    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:18:47.829796    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:18:47.840575    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:18:47.840646    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:18:47.851149    4093 logs.go:276] 0 containers: []
	W0819 04:18:47.851162    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:18:47.851227    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:18:47.860941    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:18:47.860961    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:18:47.860966    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:18:47.872434    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:18:47.872446    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:18:47.893367    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:18:47.893377    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:18:47.904938    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:18:47.904947    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:18:47.943731    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:18:47.943823    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:18:47.944392    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:18:47.944398    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:18:47.948492    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:18:47.948499    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:18:47.974199    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:18:47.974210    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:18:47.985526    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:18:47.985536    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:18:47.999306    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:18:47.999319    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:18:48.036186    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:18:48.036197    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:18:48.051969    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:18:48.051979    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:18:48.088551    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:18:48.088561    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:18:48.100090    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:18:48.100101    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:18:48.111018    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:18:48.111028    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:18:48.134806    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:18:48.134814    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:18:48.148317    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:18:48.148327    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:18:48.166121    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:18:48.166132    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:18:48.178291    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:18:48.178301    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:18:48.178326    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:18:48.178332    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:18:48.178335    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:18:48.178338    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:18:48.178341    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:18:50.528419    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:50.528516    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:55.530227    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:55.530251    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:58.182328    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:00.531756    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:00.531778    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:03.184667    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:03.184952    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:03.210439    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:19:03.210551    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:03.226683    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:19:03.226767    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:03.240114    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:19:03.240193    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:03.251339    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:19:03.251414    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:03.261398    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:19:03.261467    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:03.275234    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:19:03.275307    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:03.287119    4093 logs.go:276] 0 containers: []
	W0819 04:19:03.287134    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:03.287195    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:03.302690    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:19:03.302712    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:19:03.302718    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:19:03.316398    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:19:03.316408    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:19:03.354008    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:19:03.354021    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:19:03.375292    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:19:03.375304    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:19:03.391160    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:03.391171    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:03.414326    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:19:03.414335    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:03.425998    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:03.426008    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:03.462595    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:19:03.462608    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:19:03.480539    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:19:03.480551    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:19:03.492613    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:19:03.492625    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:19:03.509800    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:19:03.509812    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:19:03.522081    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:03.522091    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:03.557885    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:03.557977    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:03.558563    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:19:03.558568    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:19:03.570334    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:19:03.570346    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:19:03.592291    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:19:03.592302    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:19:03.604294    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:19:03.604304    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:19:03.614962    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:03.614974    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:03.619005    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:03.619015    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:03.619043    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:19:03.619048    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:03.619051    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:03.619055    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:03.619059    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:05.533844    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:05.533883    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:10.536066    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:10.536193    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:10.548012    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:19:10.548087    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:10.558455    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:19:10.558523    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:10.569149    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:19:10.569216    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:10.579007    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:19:10.579095    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:10.589855    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:19:10.589941    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:10.600276    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:19:10.600344    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:10.611623    3949 logs.go:276] 0 containers: []
	W0819 04:19:10.611642    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:10.611715    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:10.622088    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:19:10.622104    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:10.622110    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:10.626770    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:10.626780    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:10.664019    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:19:10.664030    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:19:10.678456    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:19:10.678472    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:19:10.690299    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:19:10.690312    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:19:10.702833    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:10.702846    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:10.728453    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:19:10.728465    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:10.740182    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:10.740194    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:10.777735    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.777834    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.778370    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.778457    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:10.779720    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:19:10.779731    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:19:10.798013    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:19:10.798025    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:19:10.810245    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:19:10.810259    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:19:10.824921    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:19:10.824932    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:19:10.836725    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:19:10.836736    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:19:10.854520    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:10.854529    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:10.854571    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:19:10.854576    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.854582    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.854587    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:10.854591    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:10.854594    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:10.854597    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:13.623089    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:18.625438    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:18.625678    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:18.650953    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:19:18.651041    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:18.662990    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:19:18.663071    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:18.673783    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:19:18.673856    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:18.683561    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:19:18.683630    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:18.693845    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:19:18.693915    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:18.704870    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:19:18.704943    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:18.715006    4093 logs.go:276] 0 containers: []
	W0819 04:19:18.715022    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:18.715081    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:18.726070    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:19:18.726086    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:19:18.726091    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:19:18.740650    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:19:18.740664    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:19:18.752936    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:19:18.752948    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:19:18.774349    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:19:18.774362    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:18.786093    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:18.786107    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:18.790315    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:19:18.790322    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:19:18.808917    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:19:18.808929    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:19:18.823306    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:18.823318    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:18.861545    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:18.861638    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:18.862243    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:19:18.862253    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:19:18.899580    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:19:18.899593    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:19:18.911250    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:18.911260    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:18.935032    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:19:18.935041    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:19:18.946520    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:19:18.946532    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:19:18.959434    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:19:18.959443    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:19:18.970559    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:18.970573    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:19.006262    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:19:19.006277    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:19:19.024064    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:19:19.024082    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:19:19.043613    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:19.043622    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:19.043652    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:19:19.043657    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:19.043661    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:19.043667    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:19.043670    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:20.858620    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:25.860846    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:25.861031    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:25.873047    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:19:25.873129    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:25.883742    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:19:25.883826    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:25.894415    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:19:25.894480    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:25.904919    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:19:25.904991    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:25.915191    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:19:25.915265    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:25.925766    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:19:25.925839    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:25.936115    3949 logs.go:276] 0 containers: []
	W0819 04:19:25.936126    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:25.936189    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:25.946194    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:19:25.946213    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:19:25.946218    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:19:25.957821    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:25.957833    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:25.981572    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:25.981580    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:26.016301    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.016395    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.016943    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.017031    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:26.018301    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:26.018308    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:26.022579    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:26.022587    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:26.060353    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:19:26.060362    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:19:26.076346    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:19:26.076357    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:19:26.092809    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:19:26.092820    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:19:26.104789    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:19:26.104800    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:19:26.116861    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:19:26.116874    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:19:26.128675    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:19:26.128685    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:19:26.148439    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:19:26.148450    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:19:26.166304    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:19:26.166314    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:26.178100    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:26.178114    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:26.178141    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:19:26.178145    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.178148    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.178152    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:26.178155    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:26.178163    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:26.178179    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:29.047768    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:34.050120    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:34.050370    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:34.073332    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:19:34.073437    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:34.088154    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:19:34.088238    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:34.100032    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:19:34.100096    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:34.111169    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:19:34.111237    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:34.128057    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:19:34.128130    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:34.138646    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:19:34.138717    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:34.153625    4093 logs.go:276] 0 containers: []
	W0819 04:19:34.153637    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:34.153716    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:34.164515    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:19:34.164535    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:34.164541    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:34.169254    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:34.169261    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:34.204144    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:19:34.204155    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:19:34.225459    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:19:34.225468    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:19:34.247819    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:19:34.247828    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:19:34.259760    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:34.259771    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:34.284586    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:19:34.284595    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:34.297908    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:19:34.297921    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:19:34.313376    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:19:34.313386    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:19:34.351299    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:19:34.351311    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:19:34.365480    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:19:34.365492    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:19:34.377433    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:19:34.377444    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:19:34.389990    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:19:34.390002    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:19:34.401517    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:34.401527    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:34.439638    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:34.439731    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:34.440329    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:19:34.440334    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:19:34.455050    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:19:34.455063    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:19:34.465953    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:19:34.465964    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:19:34.482787    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:34.482799    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:34.482824    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:19:34.482830    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:34.482834    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:34.482838    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:34.482844    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:36.182233    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:41.184454    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:41.184674    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:41.200355    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:19:41.200444    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:41.212554    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:19:41.212622    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:41.223661    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:19:41.223736    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:41.234064    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:19:41.234132    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:41.244851    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:19:41.244920    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:41.258930    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:19:41.259007    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:41.269162    3949 logs.go:276] 0 containers: []
	W0819 04:19:41.269173    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:41.269230    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:41.280026    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:19:41.280044    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:41.280051    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:41.316978    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.317072    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.317618    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.317716    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:41.318998    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:41.319007    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:41.323364    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:19:41.323373    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:19:41.337030    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:19:41.337041    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:19:41.348701    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:19:41.348713    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:19:41.359986    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:19:41.359999    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:19:41.377571    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:41.377584    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:41.402816    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:41.402826    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:41.437084    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:19:41.437098    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:19:41.451548    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:19:41.451559    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:19:41.463084    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:19:41.463097    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:19:41.475056    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:19:41.475068    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:19:41.490013    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:19:41.490025    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:41.502676    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:41.502685    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:41.502711    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:19:41.502718    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.502725    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.502729    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:41.502732    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:41.502738    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:41.502740    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:44.484874    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:49.487250    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:49.487596    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:49.517690    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:19:49.517815    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:49.535617    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:19:49.535713    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:49.553514    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:19:49.553585    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:49.571525    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:19:49.571589    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:49.582039    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:19:49.582097    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:49.592560    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:19:49.592638    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:49.602449    4093 logs.go:276] 0 containers: []
	W0819 04:19:49.602461    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:49.602524    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:49.613146    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:19:49.613164    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:49.613169    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:49.650750    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:49.650846    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:49.651452    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:49.651459    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:49.655642    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:19:49.655653    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:19:49.670354    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:19:49.670367    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:19:49.684526    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:19:49.684537    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:19:49.697437    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:19:49.697450    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:19:49.708741    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:19:49.708751    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:19:49.725836    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:19:49.725847    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:19:49.737210    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:49.737221    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:49.772950    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:19:49.772963    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:19:49.812428    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:19:49.812447    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:19:49.824461    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:19:49.824471    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:19:49.836433    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:19:49.836444    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:49.848506    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:19:49.848517    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:19:49.865790    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:19:49.865801    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:19:49.880469    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:19:49.880482    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:19:49.901858    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:49.901870    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:49.925117    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:49.925128    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:49.925154    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:19:49.925159    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:49.925162    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:49.925168    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:49.925171    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
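[Editor's note] The block above is one complete diagnostic pass by pid 4093: probe the apiserver's /healthz endpoint, hit the five-second client timeout, then fall back to gathering component logs. A minimal sketch of such a probe, with the endpoint URL and timeout taken from the log lines above (illustrative only, not minikube's actual api_server.go):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver's serving cert is self-signed inside the VM, so this
	// sketch skips verification; minikube itself trusts the cluster CA.
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped:" above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// On timeout this reproduces the "Client.Timeout exceeded while
		// awaiting headers" error recorded in the report.
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
```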
	I0819 04:19:51.505990    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:56.508498    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:56.508695    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:56.525774    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:19:56.525870    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:56.538925    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:19:56.539007    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:56.550297    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:19:56.550361    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:56.563773    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:19:56.563843    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:56.574101    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:19:56.574167    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:56.585599    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:19:56.585669    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:56.596486    3949 logs.go:276] 0 containers: []
	W0819 04:19:56.596495    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:56.596551    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:56.607134    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:19:56.607150    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:56.607155    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:56.633913    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:19:56.633926    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:56.645937    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:56.645948    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:56.683527    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.683623    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.684172    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.684261    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:56.685529    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:56.685538    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:56.690286    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:19:56.690295    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:19:56.704798    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:19:56.704809    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:19:56.724560    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:19:56.724574    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:19:56.743582    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:19:56.743592    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:19:56.755182    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:56.755195    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:56.790498    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:19:56.790508    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:19:56.806921    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:19:56.806932    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:19:56.820806    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:19:56.820816    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:19:56.833802    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:19:56.833813    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:19:56.845891    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:56.845901    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:56.845929    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:19:56.845933    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.845937    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.845939    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:19:56.845949    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:19:56.845952    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:56.845955    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
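[Editor's note] Each "N containers:" line in these passes is the result of a `docker ps -a` query filtered on the `k8s_<component>` name prefix that the Docker runtime gives pod containers. A hedged sketch of that discovery step over the same component list queried above (illustrative, not minikube's logs.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Same query shape as the ssh_runner lines above, run locally here
		// instead of over SSH.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		// Mirrors the "logs.go:276] N containers: [...]" report lines;
		// an empty list is how "kindnet" ends up flagged as not found.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```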
	I0819 04:19:59.927679    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:04.929927    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:04.930191    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:04.951346    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:20:04.951447    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:04.966572    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:20:04.966660    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:04.978416    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:20:04.978487    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:04.989414    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:20:04.989482    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:04.999523    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:20:04.999589    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:05.014561    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:20:05.014632    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:05.024496    4093 logs.go:276] 0 containers: []
	W0819 04:20:05.024506    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:05.024562    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:05.040124    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:20:05.040144    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:20:05.040150    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:20:05.065434    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:20:05.065447    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:20:05.077740    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:05.077752    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:05.102181    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:05.102188    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:05.106215    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:20:05.106222    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:20:05.120535    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:20:05.120544    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:20:05.132290    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:20:05.132300    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:20:05.145031    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:05.145041    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:05.183450    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:05.183561    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:05.184150    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:20:05.184157    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:20:05.200085    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:20:05.200097    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:20:05.211272    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:20:05.211283    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:20:05.222868    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:20:05.222877    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:20:05.234393    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:20:05.234403    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:05.246088    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:05.246116    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:05.280507    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:20:05.280518    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:20:05.317006    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:20:05.317018    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:20:05.331720    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:20:05.331733    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:20:05.354051    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:05.354061    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:05.354087    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:20:05.354091    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:05.354094    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:05.354098    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:05.354101    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
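[Editor's note] The "Gathering logs for <component> [<id>] ..." steps each shell out to `docker logs --tail 400` against one discovered container ID. A small sketch of that call; the two IDs below are the apiserver containers reported in the pass above, but any valid ID works:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs pulls the last 400 lines from one container, mirroring the
// `docker logs --tail 400 <id>` commands in the passes above.
func gatherLogs(id string) (string, error) {
	// docker logs writes to both stdout and stderr, so capture both.
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"857a1390fd04", "b3b1f57bf431"} {
		logs, err := gatherLogs(id)
		if err != nil {
			fmt.Printf("%s: %v\n", id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}
```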
	I0819 04:20:06.848857    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:11.851123    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:11.851366    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:11.870935    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:20:11.871043    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:11.887429    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:20:11.887506    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:11.898947    3949 logs.go:276] 2 containers: [161fcc2cac7e 781c45adfd16]
	I0819 04:20:11.899019    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:11.909187    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:20:11.909265    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:11.919943    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:20:11.920014    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:11.930738    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:20:11.930816    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:11.940986    3949 logs.go:276] 0 containers: []
	W0819 04:20:11.941002    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:11.941066    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:11.953392    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:20:11.953408    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:20:11.953414    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:11.970975    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:11.970986    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:11.975273    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:20:11.975282    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:20:11.989448    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:20:11.989461    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:20:12.009679    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:20:12.009690    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:20:12.020911    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:12.020924    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:12.045538    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:20:12.045549    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:20:12.060245    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:20:12.060257    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:20:12.072097    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:12.072106    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:12.109810    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.109906    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.110443    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.110531    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:12.111820    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:12.111826    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:12.148577    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:20:12.148591    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:20:12.162944    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:20:12.162954    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:20:12.174509    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:20:12.174519    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:20:12.187470    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:12.187480    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:12.187507    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:20:12.187512    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.187516    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.187521    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:12.187524    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:12.187527    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:12.187540    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
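[Editor's note] The recurring "Found kubelet problem" warnings come from scanning the kubelet unit's journal for error patterns; every hit in this report is the same RBAC symptom (the node user is forbidden from listing kube-system ConfigMaps it has no assigned pods for). A sketch of such a scan under an assumed substring heuristic; the real matcher in logs.go is more involved:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same journal query as the ssh_runner lines above (sudo required on
	// the VM; drop it if your user can already read the journal).
	out, err := exec.Command("/bin/bash", "-c",
		"sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		// Assumed heuristic: the forbidden-ConfigMap lines in this report
		// all contain "is forbidden".
		if strings.Contains(line, "is forbidden") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}
```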
	I0819 04:20:15.358219    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:20.360782    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:20.360995    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:20.377552    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:20:20.377640    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:20.393174    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:20:20.393254    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:20.404901    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:20:20.404981    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:20.415785    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:20:20.415860    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:20.426635    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:20:20.426699    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:20.437226    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:20:20.437297    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:20.447338    4093 logs.go:276] 0 containers: []
	W0819 04:20:20.447348    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:20.447401    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:20.457710    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:20:20.457726    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:20.457731    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:20.496348    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:20.496439    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:20.497002    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:20:20.497006    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:20:20.508556    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:20:20.508567    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:20:20.529903    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:20:20.529918    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:20:20.567714    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:20:20.567725    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:20:20.579798    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:20:20.579808    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:20:20.591267    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:20:20.591277    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:20:20.604645    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:20:20.604657    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:20:20.616288    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:20:20.616303    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:20.628562    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:20.628573    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:20.632694    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:20.632701    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:20.669836    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:20:20.669846    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:20:20.690957    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:20:20.690970    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:20:20.702642    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:20.702653    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:20.726351    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:20:20.726359    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:20:20.762769    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:20:20.762786    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:20:20.784640    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:20:20.784652    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:20:20.805532    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:20.805542    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:20.805572    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:20:20.805578    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:20.805581    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:20.805584    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:20.805587    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
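[Editor's note] The "container status" step runs the shell one-liner ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a``: prefer crictl when it is installed, otherwise fall back to `docker ps -a`. An approximate Go rendering of that fallback (sudo is dropped here for brevity):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl if it is on PATH, mirroring `which crictl || echo crictl`.
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command(path, "ps", "-a").CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return
		}
	}
	// Fallback branch of the one-liner: plain docker ps -a.
	out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	fmt.Print(string(out))
}
```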
	I0819 04:20:22.191546    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:27.193807    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:27.194172    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:27.225197    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:20:27.225335    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:27.244022    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:20:27.244116    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:27.260363    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:20:27.260428    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:27.271792    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:20:27.271866    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:27.287132    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:20:27.287193    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:27.297405    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:20:27.297467    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:27.307712    3949 logs.go:276] 0 containers: []
	W0819 04:20:27.307721    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:27.307777    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:27.318581    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:20:27.318597    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:27.318603    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:27.357433    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:20:27.357446    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:20:27.371706    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:20:27.371719    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:20:27.383516    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:20:27.383528    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:20:27.395366    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:20:27.395381    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:27.407471    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:27.407481    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:27.444728    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.444821    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.445337    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.445425    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:27.446717    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:20:27.446722    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:20:27.463828    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:20:27.463839    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:20:27.475791    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:27.475803    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:27.500034    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:27.500042    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:27.504692    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:20:27.504701    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:20:27.521580    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:20:27.521592    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:20:27.533538    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:20:27.533552    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:20:27.547937    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:20:27.547950    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:20:27.559953    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:20:27.559964    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:20:27.579596    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:27.579608    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:27.579632    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:20:27.579638    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.579642    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.579646    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:27.579649    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:27.579652    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:27.579654    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
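[Editor's note] The "describe nodes" step does not use the host kubectl; it runs the kubectl binary minikube staged inside the VM for the cluster's Kubernetes version (v1.24.1 here), pointed at the in-VM kubeconfig. A sketch of that invocation using the exact paths from the log lines above:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Version-pinned kubectl and kubeconfig paths as recorded in the report.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```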
	I0819 04:20:30.809605    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:35.811893    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:35.812144    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:35.835262    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:20:35.835386    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:35.853842    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:20:35.853916    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:35.867270    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:20:35.867349    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:35.878740    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:20:35.878807    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:35.889175    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:20:35.889237    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:35.899855    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:20:35.899927    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:35.910096    4093 logs.go:276] 0 containers: []
	W0819 04:20:35.910107    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:35.910163    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:35.920889    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:20:35.920918    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:20:35.920923    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:20:35.935130    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:35.935140    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:35.939240    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:35.939250    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:35.973499    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:20:35.973511    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:20:35.985036    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:20:35.985048    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:35.997307    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:20:35.997321    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:20:36.013059    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:20:36.013072    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:20:36.025103    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:20:36.025112    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:20:36.037199    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:20:36.037209    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:20:36.054705    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:20:36.054716    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:20:36.067161    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:36.067172    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:36.090777    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:20:36.090784    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:20:36.134624    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:20:36.134642    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:20:36.146380    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:20:36.146395    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:20:36.168040    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:20:36.168051    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:20:36.178966    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:36.178976    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:36.217984    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:36.218078    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:36.218662    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:20:36.218668    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:20:36.236425    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:36.236434    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:36.236459    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:20:36.236464    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:36.236468    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:36.236472    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:36.236475    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
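[Editor's note] Taken together, the timestamps show the overall retry cadence for both pids: a probe, a five-second timeout, a log-gathering pass, then roughly a ten-second gap before the next probe, repeating until the test's deadline. A rough sketch of that loop shape; the stub probe and the back-off interval are assumptions inferred from the timestamps, not minikube's actual retry policy:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// checkHealthz stands in for the real probe; it always times out here so
// the loop below reproduces the cadence visible in the report.
func checkHealthz() error {
	time.Sleep(5 * time.Second)
	return errors.New("context deadline exceeded (Client.Timeout exceeded while awaiting headers)")
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Printf("attempt %d: Checking apiserver healthz ...\n", attempt)
		if err := checkHealthz(); err == nil {
			fmt.Println("apiserver healthy")
			return
		} else {
			fmt.Println("stopped:", err)
		}
		// Gather logs, then wait before the next probe; the report shows
		// roughly 10 seconds from one "Checking" line to the next.
		time.Sleep(5 * time.Second)
	}
	fmt.Println("giving up")
}
```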
	I0819 04:20:37.582722    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:42.584141    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:42.584374    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:42.604587    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:20:42.604673    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:42.618782    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:20:42.618863    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:42.630925    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:20:42.630991    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:42.642266    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:20:42.642340    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:42.653920    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:20:42.653989    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:42.664633    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:20:42.664698    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:42.675183    3949 logs.go:276] 0 containers: []
	W0819 04:20:42.675195    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:42.675260    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:42.686358    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:20:42.686374    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:42.686379    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:42.691663    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:20:42.691673    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:20:42.709192    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:20:42.709204    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:20:42.722838    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:20:42.722850    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:20:42.734155    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:20:42.734168    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:20:42.745918    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:20:42.745932    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:20:42.757680    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:42.757693    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:42.793336    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.793436    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.793984    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.794076    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:42.795393    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:42.795399    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:42.831182    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:20:42.831192    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:20:42.845607    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:20:42.845618    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:20:42.862702    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:42.862721    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:42.890311    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:20:42.890327    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:42.903726    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:20:42.903738    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:20:42.921252    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:20:42.921265    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:20:42.933243    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:20:42.933251    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:20:42.944836    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:42.944847    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:42.944873    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:20:42.944878    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.944881    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.944887    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:42.944901    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:42.944905    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:42.944908    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
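
Once the containers are known, the cycle fans out over every log source: `docker logs --tail 400` per container, journalctl for the kubelet and docker/cri-docker units, a filtered dmesg, `kubectl describe nodes` via the bundled v1.24.1 binary, and a container-status listing that prefers crictl but falls back to `docker ps -a` (the backquoted `which crictl || echo crictl` above). A condensed sketch of that fan-out run locally, with the command strings quoted from the ssh_runner lines and the two container IDs taken from this cycle:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one diagnostic command through bash, mirroring the
// `ssh_runner.go:195] Run: /bin/bash -c "..."` lines in the log.
func run(cmdline string) {
	out, _ := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	fmt.Printf("$ %s\n%s\n", cmdline, out)
}

func main() {
	// IDs would come from the discovery step; these two are the
	// kube-apiserver and etcd containers seen above.
	for _, id := range []string{"a0805f9c4c2c", "8b26c07e9e7f"} {
		run("docker logs --tail 400 " + id)
	}
	run("sudo journalctl -u kubelet -n 400")
	run("sudo journalctl -u docker -u cri-docker -n 400")
	run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
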
	I0819 04:20:46.240509    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:51.242816    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:51.242925    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:51.254220    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:20:51.254286    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:51.265023    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:20:51.265106    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:51.275204    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:20:51.275280    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:51.286135    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:20:51.286212    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:51.296187    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:20:51.296256    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:51.306757    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:20:51.306844    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:51.316918    4093 logs.go:276] 0 containers: []
	W0819 04:20:51.316932    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:51.317004    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:51.326864    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:20:51.326881    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:20:51.326887    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:20:51.367266    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:20:51.367276    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:20:51.382356    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:20:51.382370    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:20:51.400378    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:20:51.400390    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:20:51.413072    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:20:51.413086    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:51.425645    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:51.425659    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:51.460110    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:20:51.460121    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:20:51.474323    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:20:51.474334    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:20:51.486517    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:51.486529    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:51.522644    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:51.522738    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:51.523307    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:20:51.523314    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:20:51.535879    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:20:51.535891    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:20:51.551463    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:51.551474    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:51.575611    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:51.575620    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:51.579772    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:20:51.579778    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:20:51.593135    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:20:51.593144    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:20:51.604882    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:20:51.604893    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:20:51.625595    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:20:51.625609    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:20:51.648378    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:51.648391    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:51.648421    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:20:51.648427    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:51.648431    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:51.648436    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:51.648446    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
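
The flagged kubelet lines are Node-authorizer denials: a kubelet may only read ConfigMaps referenced by pods bound to its node, and "no relationship found between node '…' and this object" means the authorizer could not establish that binding — unsurprising while the apiserver is unreachable mid-upgrade. The detector itself (logs.go:138) is a pattern match over the kubelet journal; a rough Go equivalent, where the regex is an assumption inferred from the problems shown:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

// problemRE approximates what counts as a "kubelet problem" in this log:
// authorization denials and failed list/watch calls.
var problemRE = regexp.MustCompile(`is forbidden|failed to list|Failed to watch`)

func main() {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		fmt.Println("journalctl:", err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if problemRE.MatchString(sc.Text()) {
			fmt.Println("Found kubelet problem:", sc.Text())
		}
	}
}
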
	I0819 04:20:52.948920    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:57.951331    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:57.951785    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:57.989059    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:20:57.989246    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:58.009347    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:20:58.009445    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:58.024320    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:20:58.024403    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:58.038621    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:20:58.038686    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:58.050065    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:20:58.050133    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:58.061404    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:20:58.061477    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:58.072085    3949 logs.go:276] 0 containers: []
	W0819 04:20:58.072096    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:58.072157    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:58.083011    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:20:58.083028    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:20:58.083033    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:20:58.104312    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:20:58.104326    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:20:58.117156    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:20:58.117166    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:20:58.132471    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:20:58.132480    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:20:58.144605    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:58.144614    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:58.169871    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:20:58.169881    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:58.184248    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:58.184260    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:58.224610    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:20:58.224625    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:20:58.236498    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:20:58.236506    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:20:58.252258    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:58.252280    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:58.290635    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.290732    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.291282    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.291369    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:58.292618    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:20:58.292623    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:20:58.306701    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:20:58.306714    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:20:58.327678    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:20:58.327692    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:20:58.339527    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:20:58.339538    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:20:58.357566    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:58.357576    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:58.362729    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:58.362737    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:58.362763    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:20:58.362768    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.362771    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.362774    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:20:58.362777    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:20:58.362780    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:58.362783    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:01.650823    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:06.653028    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:06.653371    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:06.677405    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:21:06.677531    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:06.693607    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:21:06.693699    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:06.706488    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:21:06.706564    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:06.717397    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:21:06.717470    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:06.728056    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:21:06.728129    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:06.738729    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:21:06.738801    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:06.748830    4093 logs.go:276] 0 containers: []
	W0819 04:21:06.748842    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:06.748906    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:06.759227    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:21:06.759243    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:21:06.759249    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:21:06.771128    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:21:06.771139    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:21:06.784011    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:21:06.784020    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:21:06.796016    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:21:06.796028    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:06.808038    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:21:06.808049    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:21:06.848961    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:06.848970    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:06.884344    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:21:06.884358    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:21:06.899760    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:21:06.899771    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:21:06.921485    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:21:06.921495    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:21:06.932676    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:06.932688    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:06.955777    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:06.955783    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:06.959693    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:21:06.959699    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:21:06.973772    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:06.973783    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:07.010388    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:21:07.010479    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:21:07.011044    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:21:07.011048    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:21:07.022209    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:21:07.022221    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:21:07.034151    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:21:07.034164    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:21:07.051420    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:21:07.051434    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:21:07.083376    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:07.083390    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:07.083421    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:21:07.083427    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:21:07.083442    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:21:07.083446    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:07.083449    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:08.366786    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:13.368715    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:13.368938    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:13.404327    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:21:13.404412    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:13.416509    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:21:13.416581    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:13.427027    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:21:13.427101    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:13.437633    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:21:13.437698    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:13.448328    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:21:13.448401    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:13.462656    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:21:13.462718    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:13.472956    3949 logs.go:276] 0 containers: []
	W0819 04:21:13.472968    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:13.473028    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:13.483602    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:21:13.483622    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:21:13.483629    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:21:13.495074    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:13.495088    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:13.518455    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:13.518464    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:13.558506    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:21:13.558518    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:21:13.579791    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:13.579801    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:13.617327    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.617426    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.617944    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.618031    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:13.619255    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:21:13.619260    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:21:13.633752    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:21:13.633767    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:21:13.646115    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:21:13.646126    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:21:13.662614    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:21:13.662626    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:21:13.674123    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:21:13.674137    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:21:13.691606    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:21:13.691616    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:13.706895    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:21:13.706906    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:21:13.723794    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:21:13.723805    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:21:13.738359    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:13.738370    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:13.742812    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:21:13.742820    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:21:13.755466    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:13.755478    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:13.755505    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:21:13.755511    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.755513    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.755517    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:13.755520    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:13.755523    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:13.755526    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:17.087603    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:22.090087    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:22.090213    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:22.101957    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:21:22.102035    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:22.113177    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:21:22.113243    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:22.123701    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:21:22.123771    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:22.133882    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:21:22.133946    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:22.144551    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:21:22.144612    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:22.155108    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:21:22.155178    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:22.165309    4093 logs.go:276] 0 containers: []
	W0819 04:21:22.165320    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:22.165379    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:22.175936    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:21:22.175953    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:21:22.175958    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:21:22.212730    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:21:22.212740    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:21:22.224689    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:21:22.224697    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:21:22.241755    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:22.241765    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:22.280400    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:21:22.280493    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:21:22.281093    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:21:22.281102    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:21:22.298846    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:21:22.298856    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:21:22.313397    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:21:22.313408    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:22.326299    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:22.326309    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:22.330445    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:22.330453    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:22.366201    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:21:22.366213    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:21:22.378451    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:21:22.378461    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:21:22.401063    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:21:22.401076    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:21:22.419661    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:21:22.419671    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:21:22.434663    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:21:22.434673    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:21:22.445853    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:21:22.445863    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:21:22.460496    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:21:22.460508    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:21:22.472494    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:22.472505    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:22.494901    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:22.494910    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:22.494935    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:21:22.494940    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:21:22.494963    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:21:22.494968    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:22.494971    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:23.759276    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:28.761597    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:28.761788    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:28.777378    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:21:28.777457    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:28.792673    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:21:28.792751    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:28.803921    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:21:28.803994    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:28.816713    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:21:28.816784    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:28.827240    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:21:28.827305    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:28.838451    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:21:28.838517    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:28.849286    3949 logs.go:276] 0 containers: []
	W0819 04:21:28.849297    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:28.849354    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:28.860252    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:21:28.860269    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:21:28.860274    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:21:28.873208    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:21:28.873220    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:21:28.888780    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:21:28.888790    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:21:28.906616    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:21:28.906626    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:21:28.918418    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:21:28.918429    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:21:28.930548    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:21:28.930557    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:21:28.946843    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:28.946852    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:28.983945    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:28.984042    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:28.984591    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:28.984682    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:28.985982    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:21:28.985992    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:21:28.998615    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:21:28.998627    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:21:29.009822    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:21:29.009835    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:21:29.024141    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:21:29.024152    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:21:29.039853    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:29.039863    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:29.064953    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:21:29.064964    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:29.076741    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:29.076753    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:29.081619    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:29.081628    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:29.116406    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:29.116416    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:29.116445    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:21:29.116451    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:29.116454    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:29.116458    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:29.116461    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:29.116465    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:29.116468    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:32.498969    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:37.501357    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:37.501434    4093 kubeadm.go:597] duration metric: took 4m6.983286959s to restartPrimaryControlPlane
	W0819 04:21:37.501508    4093 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 04:21:37.501546    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 04:21:38.514597    4093 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.013045208s)
	I0819 04:21:38.514687    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 04:21:38.519551    4093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:21:38.522372    4093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:21:38.525165    4093 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:21:38.525171    4093 kubeadm.go:157] found existing configuration files:
	
	I0819 04:21:38.525193    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/admin.conf
	I0819 04:21:38.528068    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:21:38.528087    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:21:38.530512    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/kubelet.conf
	I0819 04:21:38.533339    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:21:38.533367    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:21:38.536430    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/controller-manager.conf
	I0819 04:21:38.539202    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:21:38.539223    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:21:38.541694    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/scheduler.conf
	I0819 04:21:38.544650    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:21:38.544673    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
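
The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each control-plane kubeconfig survives only if it already points at the expected endpoint. A rough shell equivalent (a sketch mirroring the log lines, not minikube's actual Go implementation; the endpoint and file list are copied from above):

    endpoint="https://control-plane.minikube.internal:50464"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
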
	I0819 04:21:38.547282    4093 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 04:21:38.562244    4093 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 04:21:38.562271    4093 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 04:21:38.613304    4093 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 04:21:38.613391    4093 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 04:21:38.613474    4093 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
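
As the preflight message notes, the control-plane images can be pulled ahead of time so init does not stall on downloads (version taken from the log; run on the node):

    sudo kubeadm config images pull --kubernetes-version v1.24.1
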
	I0819 04:21:38.662332    4093 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 04:21:38.666592    4093 out.go:235]   - Generating certificates and keys ...
	I0819 04:21:38.666624    4093 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 04:21:38.666663    4093 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 04:21:38.666710    4093 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 04:21:38.666740    4093 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 04:21:38.666773    4093 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 04:21:38.666802    4093 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 04:21:38.666841    4093 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 04:21:38.666880    4093 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 04:21:38.666927    4093 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 04:21:38.666968    4093 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 04:21:38.666990    4093 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 04:21:38.667020    4093 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 04:21:38.816464    4093 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 04:21:38.972449    4093 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 04:21:39.081369    4093 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 04:21:39.227639    4093 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 04:21:39.258109    4093 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 04:21:39.258623    4093 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 04:21:39.258751    4093 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 04:21:39.326946    4093 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 04:21:39.120430    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:39.330141    4093 out.go:235]   - Booting up control plane ...
	I0819 04:21:39.330188    4093 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 04:21:39.330235    4093 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 04:21:39.330274    4093 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 04:21:39.330315    4093 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 04:21:39.330392    4093 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
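
From here the kubelet, not kubeadm, brings the control plane up: it watches the static Pod manifest directory and runs whatever it finds there. The manifests written above can be inspected directly on the node (path and expected file names match the --ignore-preflight-errors list earlier in this log):

    ls -l /etc/kubernetes/manifests
    # expected: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
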
	I0819 04:21:43.830921    4093 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501285 seconds
	I0819 04:21:43.830979    4093 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 04:21:43.835015    4093 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 04:21:44.342378    4093 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 04:21:44.342552    4093 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-446000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 04:21:44.846670    4093 kubeadm.go:310] [bootstrap-token] Using token: p7y4ix.t1jkzzhb876hyy9j
	I0819 04:21:44.122545    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:44.122694    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:44.138767    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:21:44.138857    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:44.151882    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:21:44.151955    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:44.162828    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:21:44.162904    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:44.173440    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:21:44.173511    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:44.191772    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:21:44.191842    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:44.202324    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:21:44.202393    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:44.212781    3949 logs.go:276] 0 containers: []
	W0819 04:21:44.212792    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:44.212849    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:44.223626    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:21:44.223643    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:44.223649    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:44.248846    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:44.248856    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:44.253351    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:21:44.253361    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:21:44.265388    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:21:44.265401    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:21:44.277165    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:21:44.277176    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:21:44.291209    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:21:44.291220    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:21:44.303477    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:21:44.303491    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:21:44.315104    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:21:44.315114    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:21:44.326226    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:21:44.326237    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:21:44.341458    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:21:44.341472    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:21:44.359646    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:21:44.359656    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:44.371124    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:44.371137    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:44.408183    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.408281    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.408814    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.408902    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:44.410206    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:44.410210    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:44.445255    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:21:44.445266    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:21:44.459455    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:21:44.459468    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:21:44.471518    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:44.471529    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:44.471555    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:21:44.471559    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.471562    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.471566    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:44.471569    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:44.471579    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:44.471582    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:44.849804    4093 out.go:235]   - Configuring RBAC rules ...
	I0819 04:21:44.849870    4093 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 04:21:44.849920    4093 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 04:21:44.854397    4093 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 04:21:44.855501    4093 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0819 04:21:44.856390    4093 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 04:21:44.857284    4093 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 04:21:44.860784    4093 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 04:21:45.029070    4093 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 04:21:45.250655    4093 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 04:21:45.251108    4093 kubeadm.go:310] 
	I0819 04:21:45.251138    4093 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 04:21:45.251142    4093 kubeadm.go:310] 
	I0819 04:21:45.251177    4093 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 04:21:45.251183    4093 kubeadm.go:310] 
	I0819 04:21:45.251203    4093 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 04:21:45.251237    4093 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 04:21:45.251271    4093 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 04:21:45.251277    4093 kubeadm.go:310] 
	I0819 04:21:45.251310    4093 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 04:21:45.251315    4093 kubeadm.go:310] 
	I0819 04:21:45.251343    4093 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 04:21:45.251346    4093 kubeadm.go:310] 
	I0819 04:21:45.251376    4093 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 04:21:45.251423    4093 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 04:21:45.251463    4093 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 04:21:45.251466    4093 kubeadm.go:310] 
	I0819 04:21:45.251509    4093 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 04:21:45.251554    4093 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 04:21:45.251558    4093 kubeadm.go:310] 
	I0819 04:21:45.251606    4093 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p7y4ix.t1jkzzhb876hyy9j \
	I0819 04:21:45.251663    4093 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:200cf9aaf4d8090b061170c9280858f68184aa10356c82792dd3b43229196e5e \
	I0819 04:21:45.251676    4093 kubeadm.go:310] 	--control-plane 
	I0819 04:21:45.251681    4093 kubeadm.go:310] 
	I0819 04:21:45.251727    4093 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 04:21:45.251732    4093 kubeadm.go:310] 
	I0819 04:21:45.251780    4093 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p7y4ix.t1jkzzhb876hyy9j \
	I0819 04:21:45.251828    4093 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:200cf9aaf4d8090b061170c9280858f68184aa10356c82792dd3b43229196e5e 
	I0819 04:21:45.251923    4093 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
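
Two follow-ups to the kubeadm output above: the warning is cleared with `sudo systemctl enable kubelet.service`, and the --discovery-token-ca-cert-hash printed in both join commands can be recomputed on the control-plane node with the standard kubeadm recipe (assuming the default PKI path):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
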
	I0819 04:21:45.251939    4093 cni.go:84] Creating CNI manager for ""
	I0819 04:21:45.251947    4093 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:21:45.254799    4093 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 04:21:45.261749    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 04:21:45.264848    4093 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
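
The 496-byte conflist itself is not reproduced in the log. A minimal bridge + host-local configuration of the kind minikube writes for this path typically looks like the following; treat it as a sketch of assumed content, with the /16 pod range chosen to be consistent with the 10.244.0.0/24 node PodCIDR reported later in this log:

    /etc/cni/net.d/1-k8s.conflist (assumed content, sketch only):
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
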
	I0819 04:21:45.269755    4093 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 04:21:45.269807    4093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 04:21:45.269833    4093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-446000 minikube.k8s.io/updated_at=2024_08_19T04_21_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=stopped-upgrade-446000 minikube.k8s.io/primary=true
	I0819 04:21:45.273031    4093 ops.go:34] apiserver oom_adj: -16
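
The oom_adj value comes straight from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run a few lines up; -16 on the legacy -17..15 scale keeps the apiserver near the back of the kernel OOM killer's queue. On current kernels it is just a scaled view of oom_score_adj, which can be read directly (same pgrep match assumed):

    cat /proc/$(pgrep kube-apiserver)/oom_score_adj   # likely -997, the usual critical-pod value
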
	I0819 04:21:45.311019    4093 kubeadm.go:1113] duration metric: took 41.246917ms to wait for elevateKubeSystemPrivileges
	I0819 04:21:45.311035    4093 kubeadm.go:394] duration metric: took 4m14.806381792s to StartCluster
	I0819 04:21:45.311046    4093 settings.go:142] acquiring lock: {Name:mkadddaa5ec690138051e9a9334213fba69e0867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:21:45.311165    4093 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:21:45.311602    4093 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/kubeconfig: {Name:mkcc8b27cbda2ef567c4911aa335c1e1951a7d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:21:45.311831    4093 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:21:45.311869    4093 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 04:21:45.311913    4093 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-446000"
	I0819 04:21:45.311921    4093 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:21:45.311925    4093 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-446000"
	W0819 04:21:45.311928    4093 addons.go:243] addon storage-provisioner should already be in state true
	I0819 04:21:45.311937    4093 host.go:66] Checking if "stopped-upgrade-446000" exists ...
	I0819 04:21:45.311950    4093 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-446000"
	I0819 04:21:45.311963    4093 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-446000"
	I0819 04:21:45.312178    4093 retry.go:31] will retry after 1.410262221s: connect: dial unix /Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/monitor: connect: connection refused
	I0819 04:21:45.312890    4093 kapi.go:59] client config for stopped-upgrade-446000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.key", CAFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102335610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:21:45.313003    4093 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-446000"
	W0819 04:21:45.313008    4093 addons.go:243] addon default-storageclass should already be in state true
	I0819 04:21:45.313016    4093 host.go:66] Checking if "stopped-upgrade-446000" exists ...
	I0819 04:21:45.313528    4093 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 04:21:45.313532    4093 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 04:21:45.313537    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:21:45.315747    4093 out.go:177] * Verifying Kubernetes components...
	I0819 04:21:45.322805    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:21:45.391437    4093 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:21:45.396536    4093 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:21:45.396581    4093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:21:45.400435    4093 api_server.go:72] duration metric: took 88.595625ms to wait for apiserver process to appear ...
	I0819 04:21:45.400444    4093 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:21:45.400450    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:45.405756    4093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 04:21:45.729111    4093 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 04:21:45.729127    4093 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 04:21:46.730287    4093 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:21:46.734219    4093 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:21:46.734226    4093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 04:21:46.734234    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:21:46.770238    4093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:21:50.401454    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:50.401498    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:54.475581    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:55.402446    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:55.402521    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:59.477802    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:59.477910    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:59.490084    3949 logs.go:276] 1 containers: [a0805f9c4c2c]
	I0819 04:21:59.490160    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:59.501911    3949 logs.go:276] 1 containers: [8b26c07e9e7f]
	I0819 04:21:59.501995    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:59.515965    3949 logs.go:276] 4 containers: [b8387e4e1e6c 76bba5139c4a 161fcc2cac7e 781c45adfd16]
	I0819 04:21:59.516038    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:59.527067    3949 logs.go:276] 1 containers: [ae35457314f6]
	I0819 04:21:59.527144    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:59.539445    3949 logs.go:276] 1 containers: [6268fe998982]
	I0819 04:21:59.539515    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:59.550215    3949 logs.go:276] 1 containers: [0e2a041f6a1c]
	I0819 04:21:59.550290    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:59.563937    3949 logs.go:276] 0 containers: []
	W0819 04:21:59.563955    3949 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:59.564020    3949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:59.579278    3949 logs.go:276] 1 containers: [ce9e3ca02329]
	I0819 04:21:59.579295    3949 logs.go:123] Gathering logs for coredns [b8387e4e1e6c] ...
	I0819 04:21:59.579302    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8387e4e1e6c"
	I0819 04:21:59.591611    3949 logs.go:123] Gathering logs for kube-proxy [6268fe998982] ...
	I0819 04:21:59.591621    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6268fe998982"
	I0819 04:21:59.606872    3949 logs.go:123] Gathering logs for storage-provisioner [ce9e3ca02329] ...
	I0819 04:21:59.606887    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9e3ca02329"
	I0819 04:21:59.637010    3949 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:59.637022    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:59.661381    3949 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:59.661395    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:59.697531    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.697626    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.698160    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.698247    3949 logs.go:138] Found kubelet problem: Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:59.699521    3949 logs.go:123] Gathering logs for kube-apiserver [a0805f9c4c2c] ...
	I0819 04:21:59.699531    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0805f9c4c2c"
	I0819 04:21:59.714085    3949 logs.go:123] Gathering logs for kube-scheduler [ae35457314f6] ...
	I0819 04:21:59.714094    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae35457314f6"
	I0819 04:21:59.729351    3949 logs.go:123] Gathering logs for kube-controller-manager [0e2a041f6a1c] ...
	I0819 04:21:59.729363    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2a041f6a1c"
	I0819 04:21:59.747525    3949 logs.go:123] Gathering logs for coredns [76bba5139c4a] ...
	I0819 04:21:59.747536    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bba5139c4a"
	I0819 04:21:59.759537    3949 logs.go:123] Gathering logs for coredns [161fcc2cac7e] ...
	I0819 04:21:59.759552    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161fcc2cac7e"
	I0819 04:21:59.771419    3949 logs.go:123] Gathering logs for container status ...
	I0819 04:21:59.771432    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:59.783551    3949 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:59.783566    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:59.788612    3949 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:59.788621    3949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:59.826528    3949 logs.go:123] Gathering logs for etcd [8b26c07e9e7f] ...
	I0819 04:21:59.826540    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b26c07e9e7f"
	I0819 04:21:59.841153    3949 logs.go:123] Gathering logs for coredns [781c45adfd16] ...
	I0819 04:21:59.841165    3949 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 781c45adfd16"
	I0819 04:21:59.854011    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:59.854021    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:59.854049    3949 out.go:270] X Problems detected in kubelet:
	W0819 04:21:59.854054    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.854058    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.854063    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	W0819 04:21:59.854067    3949 out.go:270]   Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	I0819 04:21:59.854070    3949 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:59.854073    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:00.402728    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:00.402751    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:05.402983    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:05.403007    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:09.858092    3949 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:10.403370    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:10.403406    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:14.860359    3949 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:14.864745    3949 out.go:201] 
	W0819 04:22:14.868836    3949 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 04:22:14.868845    3949 out.go:270] * 
	W0819 04:22:14.869413    3949 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:22:14.884743    3949 out.go:201] 
	I0819 04:22:15.403851    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:15.403871    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 04:22:15.731057    4093 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 04:22:15.736748    4093 out.go:177] * Enabled addons: storage-provisioner
	I0819 04:22:15.744745    4093 addons.go:510] duration metric: took 30.433260625s for enable addons: enabled=[storage-provisioner]
	I0819 04:22:20.404706    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:20.404745    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:25.405903    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:25.405951    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
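
Both minikube processes (pids 3949 and 4093) are polling the same endpoint on a five-second budget and hitting the client timeout every cycle. The probe is easy to reproduce by hand (a sketch; -k skips verification against the cluster CA, which the real client does perform):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz   # a healthy apiserver answers "ok"
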
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-08-19 11:13:10 UTC, ends at Mon 2024-08-19 11:22:30 UTC. --
	Aug 19 11:22:11 running-upgrade-079000 dockerd[3293]: time="2024-08-19T11:22:11.798314999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 11:22:11 running-upgrade-079000 dockerd[3293]: time="2024-08-19T11:22:11.798454027Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9d76aa94583c9051fb5ca2be44309af68b5d1e3536ae46ebcdd2624dcafa3c73 pid=17503 runtime=io.containerd.runc.v2
	Aug 19 11:22:11 running-upgrade-079000 dockerd[3293]: time="2024-08-19T11:22:11.798599179Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f6c58525c451f09258a02da4cc89fadc3da83f250ad405eda3ede4ba0d629496 pid=17496 runtime=io.containerd.runc.v2
	Aug 19 11:22:12 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:12Z" level=error msg="ContainerStats resp: {0x4000610880 linux}"
	Aug 19 11:22:13 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:13Z" level=error msg="ContainerStats resp: {0x4000819c00 linux}"
	Aug 19 11:22:13 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:13Z" level=error msg="ContainerStats resp: {0x40004a8700 linux}"
	Aug 19 11:22:13 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:13Z" level=error msg="ContainerStats resp: {0x40004fc740 linux}"
	Aug 19 11:22:13 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:13Z" level=error msg="ContainerStats resp: {0x40004a9000 linux}"
	Aug 19 11:22:13 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:13Z" level=error msg="ContainerStats resp: {0x40004a9140 linux}"
	Aug 19 11:22:13 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:13Z" level=error msg="ContainerStats resp: {0x40004a9280 linux}"
	Aug 19 11:22:13 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:13Z" level=error msg="ContainerStats resp: {0x40004a9bc0 linux}"
	Aug 19 11:22:14 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:14Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 11:22:19 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:19Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 11:22:23 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:23Z" level=error msg="ContainerStats resp: {0x4000610a00 linux}"
	Aug 19 11:22:23 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:23Z" level=error msg="ContainerStats resp: {0x4000611600 linux}"
	Aug 19 11:22:24 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:24Z" level=error msg="ContainerStats resp: {0x40004a8040 linux}"
	Aug 19 11:22:24 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:24Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 11:22:25 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:25Z" level=error msg="ContainerStats resp: {0x40004a9100 linux}"
	Aug 19 11:22:25 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:25Z" level=error msg="ContainerStats resp: {0x400091f600 linux}"
	Aug 19 11:22:25 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:25Z" level=error msg="ContainerStats resp: {0x400091fa40 linux}"
	Aug 19 11:22:25 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:25Z" level=error msg="ContainerStats resp: {0x400091fe40 linux}"
	Aug 19 11:22:25 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:25Z" level=error msg="ContainerStats resp: {0x40003a12c0 linux}"
	Aug 19 11:22:25 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:25Z" level=error msg="ContainerStats resp: {0x4000818740 linux}"
	Aug 19 11:22:25 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:25Z" level=error msg="ContainerStats resp: {0x40003a1b00 linux}"
	Aug 19 11:22:29 running-upgrade-079000 cri-dockerd[3135]: time="2024-08-19T11:22:29Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9d76aa94583c9       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   756a0d663a935
	f6c58525c451f       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   38195840258bf
	b8387e4e1e6ca       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   756a0d663a935
	76bba5139c4af       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   38195840258bf
	6268fe9989824       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   ef7139c585d2c
	ce9e3ca023293       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   71f198f8345dc
	ae35457314f65       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   9b3e580fee428
	0e2a041f6a1c6       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   c6040fa49288a
	a0805f9c4c2ce       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   4683e52a99f71
	8b26c07e9e7f1       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   c6f1e408a920b
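
This table is what the `sudo crictl ps -a || sudo docker ps -a` fallback seen earlier produces. With cri-dockerd in use, the same view is available through the CRI socket named in the log (sketch):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
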
	
	
	==> coredns [76bba5139c4a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:38362->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:57178->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:41405->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:37098->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:36410->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:51373->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:58427->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:35948->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:54724->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8928138140592118619.8815724303182731608. HINFO: read udp 10.244.0.3:60758->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d76aa94583c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4724318554230486118.5809958437914558171. HINFO: read udp 10.244.0.2:54462->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4724318554230486118.5809958437914558171. HINFO: read udp 10.244.0.2:42136->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4724318554230486118.5809958437914558171. HINFO: read udp 10.244.0.2:40238->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4724318554230486118.5809958437914558171. HINFO: read udp 10.244.0.2:46838->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b8387e4e1e6c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:60566->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:58668->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:58531->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:50163->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:49420->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:56728->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:51339->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:60338->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:33322->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2919411591989810686.7208978879678184721. HINFO: read udp 10.244.0.2:53130->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f6c58525c451] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7803789295532173959.4463696551690275790. HINFO: read udp 10.244.0.3:46616->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7803789295532173959.4463696551690275790. HINFO: read udp 10.244.0.3:51740->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7803789295532173959.4463696551690275790. HINFO: read udp 10.244.0.3:57005->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7803789295532173959.4463696551690275790. HINFO: read udp 10.244.0.3:59528->10.0.2.3:53: i/o timeout
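
All four coredns instances above share one failure mode: the HINFO self-check query to the upstream resolver at 10.0.2.3 (QEMU's user-mode-networking DNS) times out. A quick probe from inside the guest narrows it down (a sketch, assuming dig is present in the image):

    dig +time=2 +tries=1 @10.0.2.3 kubernetes.io
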
	
	
	==> describe nodes <==
	Name:               running-upgrade-079000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-079000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=running-upgrade-079000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T04_18_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-079000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:22:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:18:10 +0000   Mon, 19 Aug 2024 11:18:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:18:10 +0000   Mon, 19 Aug 2024 11:18:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:18:10 +0000   Mon, 19 Aug 2024 11:18:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:18:10 +0000   Mon, 19 Aug 2024 11:18:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-079000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 5905161becb345dca52a8749cd0603fe
	  System UUID:                5905161becb345dca52a8749cd0603fe
	  Boot ID:                    11e24438-54cc-41ef-abf1-14af3b3777a5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-t8mng                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-xgdmj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-079000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-079000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-running-upgrade-079000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-proxy-f2hsd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-079000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-079000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-079000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-079000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node running-upgrade-079000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node running-upgrade-079000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node running-upgrade-079000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m21s                  kubelet          Node running-upgrade-079000 status is now: NodeReady
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m9s                   node-controller  Node running-upgrade-079000 event: Registered Node running-upgrade-079000 in Controller
	
	
	==> dmesg <==
	[  +1.793596] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.081496] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.077008] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.141956] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.086267] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.061368] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.523984] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[ +21.133529] systemd-fstab-generator[2018]: Ignoring "noauto" for root device
	[  +2.469594] systemd-fstab-generator[2292]: Ignoring "noauto" for root device
	[  +0.142752] systemd-fstab-generator[2327]: Ignoring "noauto" for root device
	[  +0.097645] systemd-fstab-generator[2339]: Ignoring "noauto" for root device
	[  +0.094112] systemd-fstab-generator[2354]: Ignoring "noauto" for root device
	[  +2.582376] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.200070] systemd-fstab-generator[3090]: Ignoring "noauto" for root device
	[  +0.083023] systemd-fstab-generator[3103]: Ignoring "noauto" for root device
	[  +0.081159] systemd-fstab-generator[3114]: Ignoring "noauto" for root device
	[  +0.100162] systemd-fstab-generator[3128]: Ignoring "noauto" for root device
	[  +2.237674] systemd-fstab-generator[3280]: Ignoring "noauto" for root device
	[  +2.242660] systemd-fstab-generator[3637]: Ignoring "noauto" for root device
	[  +1.127454] systemd-fstab-generator[3780]: Ignoring "noauto" for root device
	[Aug19 11:14] kauditd_printk_skb: 68 callbacks suppressed
	[ +39.532531] kauditd_printk_skb: 21 callbacks suppressed
	[Aug19 11:18] systemd-fstab-generator[11968]: Ignoring "noauto" for root device
	[  +5.141264] systemd-fstab-generator[12567]: Ignoring "noauto" for root device
	[  +0.460535] systemd-fstab-generator[12698]: Ignoring "noauto" for root device
	
	
	==> etcd [8b26c07e9e7f] <==
	{"level":"info","ts":"2024-08-19T11:18:06.059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-19T11:18:06.059Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-19T11:18:06.062Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T11:18:06.062Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T11:18:06.062Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T11:18:06.062Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-19T11:18:06.062Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-19T11:18:06.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T11:18:06.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T11:18:06.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-19T11:18:06.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T11:18:06.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-19T11:18:06.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-19T11:18:06.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-19T11:18:06.356Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:18:06.360Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:18:06.360Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:18:06.360Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:18:06.360Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-079000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T11:18:06.360Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T11:18:06.360Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T11:18:06.361Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-19T11:18:06.361Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T11:18:06.361Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T11:18:06.362Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:22:31 up 9 min,  0 users,  load average: 0.26, 0.22, 0.12
	Linux running-upgrade-079000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a0805f9c4c2c] <==
	I0819 11:18:07.753143       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 11:18:07.753264       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0819 11:18:07.755261       1 cache.go:39] Caches are synced for autoregister controller
	I0819 11:18:07.756300       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0819 11:18:07.757015       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0819 11:18:07.766892       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0819 11:18:07.785073       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0819 11:18:08.493908       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0819 11:18:08.660190       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 11:18:08.665203       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 11:18:08.665569       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 11:18:08.807909       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 11:18:08.818573       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 11:18:08.840894       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0819 11:18:08.842812       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0819 11:18:08.843194       1 controller.go:611] quota admission added evaluator for: endpoints
	I0819 11:18:08.844586       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 11:18:09.807198       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0819 11:18:10.170298       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0819 11:18:10.173405       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0819 11:18:10.181802       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0819 11:18:10.226698       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 11:18:23.313322       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0819 11:18:23.411593       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0819 11:18:25.393347       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [0e2a041f6a1c] <==
	I0819 11:18:22.557454       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0819 11:18:22.557486       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0819 11:18:22.560325       1 shared_informer.go:262] Caches are synced for ephemeral
	I0819 11:18:22.581704       1 shared_informer.go:262] Caches are synced for expand
	I0819 11:18:22.589900       1 shared_informer.go:262] Caches are synced for persistent volume
	I0819 11:18:22.599404       1 shared_informer.go:262] Caches are synced for PVC protection
	I0819 11:18:22.606552       1 shared_informer.go:262] Caches are synced for stateful set
	I0819 11:18:22.607666       1 shared_informer.go:262] Caches are synced for crt configmap
	I0819 11:18:22.610544       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0819 11:18:22.661601       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 11:18:22.661620       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 11:18:22.708130       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0819 11:18:22.708167       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0819 11:18:22.708176       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0819 11:18:22.708177       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0819 11:18:22.725345       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0819 11:18:22.771337       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0819 11:18:22.808941       1 shared_informer.go:262] Caches are synced for attach detach
	I0819 11:18:23.172190       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 11:18:23.260377       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 11:18:23.260448       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0819 11:18:23.316146       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0819 11:18:23.414135       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f2hsd"
	I0819 11:18:23.663104       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-xgdmj"
	I0819 11:18:23.666717       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-t8mng"
	
	
	==> kube-proxy [6268fe998982] <==
	I0819 11:18:25.381347       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0819 11:18:25.381372       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0819 11:18:25.381383       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0819 11:18:25.391676       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0819 11:18:25.391687       1 server_others.go:206] "Using iptables Proxier"
	I0819 11:18:25.391701       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0819 11:18:25.391787       1 server.go:661] "Version info" version="v1.24.1"
	I0819 11:18:25.391791       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:18:25.392003       1 config.go:317] "Starting service config controller"
	I0819 11:18:25.392009       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0819 11:18:25.392016       1 config.go:226] "Starting endpoint slice config controller"
	I0819 11:18:25.392018       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0819 11:18:25.392356       1 config.go:444] "Starting node config controller"
	I0819 11:18:25.392359       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0819 11:18:25.492438       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0819 11:18:25.492514       1 shared_informer.go:262] Caches are synced for service config
	I0819 11:18:25.492799       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [ae35457314f6] <==
	W0819 11:18:07.733492       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 11:18:07.734040       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0819 11:18:07.733510       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 11:18:07.734074       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0819 11:18:07.733525       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 11:18:07.734108       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0819 11:18:07.733543       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 11:18:07.734112       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0819 11:18:07.733559       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 11:18:07.734116       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0819 11:18:07.733573       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 11:18:07.734120       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0819 11:18:07.733585       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 11:18:07.734125       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0819 11:18:07.733601       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 11:18:07.734130       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0819 11:18:07.733613       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:18:07.734134       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 11:18:08.573902       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:18:08.573977       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 11:18:08.641494       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 11:18:08.642058       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0819 11:18:08.682631       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 11:18:08.682796       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0819 11:18:11.630411       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-08-19 11:13:10 UTC, ends at Mon 2024-08-19 11:22:31 UTC. --
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.293443   12573 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/747ff7e4-d84b-4972-886f-62b18c893317-kube-api-access-kt7p5 podName:747ff7e4-d84b-4972-886f-62b18c893317 nodeName:}" failed. No retries permitted until 2024-08-19 11:18:24.293433685 +0000 UTC m=+14.137147736 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kt7p5" (UniqueName: "kubernetes.io/projected/747ff7e4-d84b-4972-886f-62b18c893317-kube-api-access-kt7p5") pod "storage-provisioner" (UID: "747ff7e4-d84b-4972-886f-62b18c893317") : configmap "kube-root-ca.crt" not found
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.416710   12573 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.418236   12573 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.418259   12573 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.603943   12573 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/593abb49-f5e9-45ea-81c8-70f0dae45c63-kube-proxy\") pod \"kube-proxy-f2hsd\" (UID: \"593abb49-f5e9-45ea-81c8-70f0dae45c63\") " pod="kube-system/kube-proxy-f2hsd"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.603970   12573 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/593abb49-f5e9-45ea-81c8-70f0dae45c63-xtables-lock\") pod \"kube-proxy-f2hsd\" (UID: \"593abb49-f5e9-45ea-81c8-70f0dae45c63\") " pod="kube-system/kube-proxy-f2hsd"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.603983   12573 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/593abb49-f5e9-45ea-81c8-70f0dae45c63-lib-modules\") pod \"kube-proxy-f2hsd\" (UID: \"593abb49-f5e9-45ea-81c8-70f0dae45c63\") " pod="kube-system/kube-proxy-f2hsd"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.604005   12573 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmdq2\" (UniqueName: \"kubernetes.io/projected/593abb49-f5e9-45ea-81c8-70f0dae45c63-kube-api-access-kmdq2\") pod \"kube-proxy-f2hsd\" (UID: \"593abb49-f5e9-45ea-81c8-70f0dae45c63\") " pod="kube-system/kube-proxy-f2hsd"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.666597   12573 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: W0819 11:18:23.668472   12573 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: E0819 11:18:23.668493   12573 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-079000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-079000' and this object
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.671413   12573 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.806076   12573 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09b8aead-ec16-45da-b0bd-924b5dad8081-config-volume\") pod \"coredns-6d4b75cb6d-xgdmj\" (UID: \"09b8aead-ec16-45da-b0bd-924b5dad8081\") " pod="kube-system/coredns-6d4b75cb6d-xgdmj"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.806107   12573 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9s4c\" (UniqueName: \"kubernetes.io/projected/e4193373-25e1-43c0-8383-ccf36adf9b3e-kube-api-access-x9s4c\") pod \"coredns-6d4b75cb6d-t8mng\" (UID: \"e4193373-25e1-43c0-8383-ccf36adf9b3e\") " pod="kube-system/coredns-6d4b75cb6d-t8mng"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.806126   12573 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42x89\" (UniqueName: \"kubernetes.io/projected/09b8aead-ec16-45da-b0bd-924b5dad8081-kube-api-access-42x89\") pod \"coredns-6d4b75cb6d-xgdmj\" (UID: \"09b8aead-ec16-45da-b0bd-924b5dad8081\") " pod="kube-system/coredns-6d4b75cb6d-xgdmj"
	Aug 19 11:18:23 running-upgrade-079000 kubelet[12573]: I0819 11:18:23.806149   12573 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4193373-25e1-43c0-8383-ccf36adf9b3e-config-volume\") pod \"coredns-6d4b75cb6d-t8mng\" (UID: \"e4193373-25e1-43c0-8383-ccf36adf9b3e\") " pod="kube-system/coredns-6d4b75cb6d-t8mng"
	Aug 19 11:18:24 running-upgrade-079000 kubelet[12573]: E0819 11:18:24.705665   12573 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Aug 19 11:18:24 running-upgrade-079000 kubelet[12573]: E0819 11:18:24.705721   12573 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/593abb49-f5e9-45ea-81c8-70f0dae45c63-kube-proxy podName:593abb49-f5e9-45ea-81c8-70f0dae45c63 nodeName:}" failed. No retries permitted until 2024-08-19 11:18:25.205709534 +0000 UTC m=+15.049423544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/593abb49-f5e9-45ea-81c8-70f0dae45c63-kube-proxy") pod "kube-proxy-f2hsd" (UID: "593abb49-f5e9-45ea-81c8-70f0dae45c63") : failed to sync configmap cache: timed out waiting for the condition
	Aug 19 11:18:24 running-upgrade-079000 kubelet[12573]: E0819 11:18:24.907499   12573 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 19 11:18:24 running-upgrade-079000 kubelet[12573]: E0819 11:18:24.907544   12573 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e4193373-25e1-43c0-8383-ccf36adf9b3e-config-volume podName:e4193373-25e1-43c0-8383-ccf36adf9b3e nodeName:}" failed. No retries permitted until 2024-08-19 11:18:25.407531137 +0000 UTC m=+15.251245189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e4193373-25e1-43c0-8383-ccf36adf9b3e-config-volume") pod "coredns-6d4b75cb6d-t8mng" (UID: "e4193373-25e1-43c0-8383-ccf36adf9b3e") : failed to sync configmap cache: timed out waiting for the condition
	Aug 19 11:18:24 running-upgrade-079000 kubelet[12573]: E0819 11:18:24.907499   12573 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 19 11:18:24 running-upgrade-079000 kubelet[12573]: E0819 11:18:24.907644   12573 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/09b8aead-ec16-45da-b0bd-924b5dad8081-config-volume podName:09b8aead-ec16-45da-b0bd-924b5dad8081 nodeName:}" failed. No retries permitted until 2024-08-19 11:18:25.407639717 +0000 UTC m=+15.251353769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/09b8aead-ec16-45da-b0bd-924b5dad8081-config-volume") pod "coredns-6d4b75cb6d-xgdmj" (UID: "09b8aead-ec16-45da-b0bd-924b5dad8081") : failed to sync configmap cache: timed out waiting for the condition
	Aug 19 11:18:25 running-upgrade-079000 kubelet[12573]: I0819 11:18:25.294123   12573 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ef7139c585d2c838201eed23ea81e31ca2046f7a43e0b89e168bac4fe1fa9d6d"
	Aug 19 11:22:12 running-upgrade-079000 kubelet[12573]: I0819 11:22:12.680006   12573 scope.go:110] "RemoveContainer" containerID="161fcc2cac7e5f6708a6156652711f9c93cfe7588c3b504bf3cc768051ebcc0d"
	Aug 19 11:22:12 running-upgrade-079000 kubelet[12573]: I0819 11:22:12.690233   12573 scope.go:110] "RemoveContainer" containerID="781c45adfd162f202f83a32c858c9b90e689304c537ae3fbd04ebbbdbca427fe"
	
	
	==> storage-provisioner [ce9e3ca02329] <==
	I0819 11:18:24.512302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 11:18:24.517408       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 11:18:24.517424       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 11:18:24.522020       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 11:18:24.522157       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35e02196-c476-4cbd-b5db-ca00672f3bc2", APIVersion:"v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-079000_f85e6918-84e1-417d-bbe4-6ace8e55b871 became leader
	I0819 11:18:24.522175       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-079000_f85e6918-84e1-417d-bbe4-6ace8e55b871!
	I0819 11:18:24.623874       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-079000_f85e6918-84e1-417d-bbe4-6ace8e55b871!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-079000 -n running-upgrade-079000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-079000 -n running-upgrade-079000: exit status 2 (15.663678958s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-079000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-079000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-079000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-079000: (1.332268416s)
--- FAIL: TestRunningBinaryUpgrade (607.28s)
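
Note on the status probe above: --format={{.APIServer}} is parsed by minikube as a Go text/template and executed against its status value, which is why a bare field name prints just "Stopped". A minimal sketch of that mechanism, assuming a stand-in Status struct (illustrative only, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status loosely mirrors the fields minikube's `status --format` exposes
	// ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, ...); this struct is a
	// hypothetical stand-in for illustration.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// The --format argument becomes the template body; executing it
		// against the status value prints only the requested field.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}); err != nil {
			os.Exit(1)
		}
	}

Run against the values in this failure, the sketch prints "Stopped", matching the -- stdout -- block above.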

                                                
                                    
TestKubernetesUpgrade (18.63s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-262000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-262000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.841245833s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-262000" primary control-plane node in "kubernetes-upgrade-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
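
Note on the failure mode above: the qemu2 driver launches QEMU through socket_vmnet_client, which must connect to the unix socket at /var/run/socket_vmnet, so the repeated 'Failed to connect to "/var/run/socket_vmnet": Connection refused' means nothing was listening on the host and both create attempts died before the guest ever booted. A minimal Go sketch of the same reachability check (a hypothetical diagnostic, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the socket that socket_vmnet_client needs; "connection
		// refused" here reproduces the error seen in the minikube log.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}

On a Homebrew-based setup the daemon is typically started with "sudo brew services start socket_vmnet" (per minikube's qemu2 driver docs; the exact service command depends on how socket_vmnet was installed).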
** stderr ** 
	I0819 04:15:41.131437    4014 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:15:41.131601    4014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:15:41.131605    4014 out.go:358] Setting ErrFile to fd 2...
	I0819 04:15:41.131607    4014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:15:41.131733    4014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:15:41.133019    4014 out.go:352] Setting JSON to false
	I0819 04:15:41.151467    4014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2704,"bootTime":1724063437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:15:41.151556    4014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:15:41.154456    4014 out.go:177] * [kubernetes-upgrade-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:15:41.162224    4014 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:15:41.162378    4014 notify.go:220] Checking for updates...
	I0819 04:15:41.168149    4014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:15:41.171177    4014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:15:41.174084    4014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:15:41.177127    4014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:15:41.180252    4014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:15:41.183514    4014 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:15:41.183581    4014 config.go:182] Loaded profile config "running-upgrade-079000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:15:41.183629    4014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:15:41.188172    4014 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:15:41.194166    4014 start.go:297] selected driver: qemu2
	I0819 04:15:41.194175    4014 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:15:41.194182    4014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:15:41.196518    4014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:15:41.199127    4014 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:15:41.202214    4014 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 04:15:41.202256    4014 cni.go:84] Creating CNI manager for ""
	I0819 04:15:41.202265    4014 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 04:15:41.202308    4014 start.go:340] cluster config:
	{Name:kubernetes-upgrade-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:15:41.205736    4014 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:15:41.213156    4014 out.go:177] * Starting "kubernetes-upgrade-262000" primary control-plane node in "kubernetes-upgrade-262000" cluster
	I0819 04:15:41.217144    4014 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:15:41.217159    4014 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:15:41.217168    4014 cache.go:56] Caching tarball of preloaded images
	I0819 04:15:41.217227    4014 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:15:41.217232    4014 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 04:15:41.217293    4014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/kubernetes-upgrade-262000/config.json ...
	I0819 04:15:41.217303    4014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/kubernetes-upgrade-262000/config.json: {Name:mkc1b502db1c72502259c749544bfab3e7858f48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:15:41.217658    4014 start.go:360] acquireMachinesLock for kubernetes-upgrade-262000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:15:41.217697    4014 start.go:364] duration metric: took 30.75µs to acquireMachinesLock for "kubernetes-upgrade-262000"
	I0819 04:15:41.217710    4014 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:15:41.217738    4014 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:15:41.225121    4014 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:15:41.240298    4014 start.go:159] libmachine.API.Create for "kubernetes-upgrade-262000" (driver="qemu2")
	I0819 04:15:41.240322    4014 client.go:168] LocalClient.Create starting
	I0819 04:15:41.240395    4014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:15:41.240430    4014 main.go:141] libmachine: Decoding PEM data...
	I0819 04:15:41.240446    4014 main.go:141] libmachine: Parsing certificate...
	I0819 04:15:41.240488    4014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:15:41.240514    4014 main.go:141] libmachine: Decoding PEM data...
	I0819 04:15:41.240522    4014 main.go:141] libmachine: Parsing certificate...
	I0819 04:15:41.240862    4014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:15:41.390556    4014 main.go:141] libmachine: Creating SSH key...
	I0819 04:15:41.488596    4014 main.go:141] libmachine: Creating Disk image...
	I0819 04:15:41.488603    4014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:15:41.488802    4014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2
	I0819 04:15:41.498189    4014 main.go:141] libmachine: STDOUT: 
	I0819 04:15:41.498207    4014 main.go:141] libmachine: STDERR: 
	I0819 04:15:41.498266    4014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2 +20000M
	I0819 04:15:41.506469    4014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:15:41.506485    4014 main.go:141] libmachine: STDERR: 
	I0819 04:15:41.506498    4014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2
	I0819 04:15:41.506504    4014 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:15:41.506515    4014 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:15:41.506543    4014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:2a:a9:42:2c:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2
	I0819 04:15:41.508156    4014 main.go:141] libmachine: STDOUT: 
	I0819 04:15:41.508171    4014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:15:41.508193    4014 client.go:171] duration metric: took 267.869791ms to LocalClient.Create
	I0819 04:15:43.510446    4014 start.go:128] duration metric: took 2.292631708s to createHost
	I0819 04:15:43.510520    4014 start.go:83] releasing machines lock for "kubernetes-upgrade-262000", held for 2.29284375s
	W0819 04:15:43.510572    4014 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:15:43.520887    4014 out.go:177] * Deleting "kubernetes-upgrade-262000" in qemu2 ...
	W0819 04:15:43.551840    4014 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:15:43.551869    4014 start.go:729] Will try again in 5 seconds ...
	I0819 04:15:48.554018    4014 start.go:360] acquireMachinesLock for kubernetes-upgrade-262000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:15:48.554624    4014 start.go:364] duration metric: took 501.208µs to acquireMachinesLock for "kubernetes-upgrade-262000"
	I0819 04:15:48.554814    4014 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:15:48.555078    4014 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:15:48.563455    4014 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:15:48.614826    4014 start.go:159] libmachine.API.Create for "kubernetes-upgrade-262000" (driver="qemu2")
	I0819 04:15:48.614878    4014 client.go:168] LocalClient.Create starting
	I0819 04:15:48.615002    4014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:15:48.615072    4014 main.go:141] libmachine: Decoding PEM data...
	I0819 04:15:48.615087    4014 main.go:141] libmachine: Parsing certificate...
	I0819 04:15:48.615156    4014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:15:48.615200    4014 main.go:141] libmachine: Decoding PEM data...
	I0819 04:15:48.615211    4014 main.go:141] libmachine: Parsing certificate...
	I0819 04:15:48.615772    4014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:15:48.774850    4014 main.go:141] libmachine: Creating SSH key...
	I0819 04:15:48.875937    4014 main.go:141] libmachine: Creating Disk image...
	I0819 04:15:48.875951    4014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:15:48.876173    4014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2
	I0819 04:15:48.886507    4014 main.go:141] libmachine: STDOUT: 
	I0819 04:15:48.886529    4014 main.go:141] libmachine: STDERR: 
	I0819 04:15:48.886597    4014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2 +20000M
	I0819 04:15:48.895868    4014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:15:48.895900    4014 main.go:141] libmachine: STDERR: 
	I0819 04:15:48.895916    4014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2
	I0819 04:15:48.895921    4014 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:15:48.895928    4014 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:15:48.895958    4014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:0e:d5:c9:2f:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2
	I0819 04:15:48.897871    4014 main.go:141] libmachine: STDOUT: 
	I0819 04:15:48.897897    4014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:15:48.897923    4014 client.go:171] duration metric: took 283.041625ms to LocalClient.Create
	I0819 04:15:50.900110    4014 start.go:128] duration metric: took 2.345017459s to createHost
	I0819 04:15:50.900215    4014 start.go:83] releasing machines lock for "kubernetes-upgrade-262000", held for 2.345579458s
	W0819 04:15:50.900621    4014 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:15:50.911253    4014 out.go:201] 
	W0819 04:15:50.915424    4014 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:15:50.915477    4014 out.go:270] * 
	* 
	W0819 04:15:50.918105    4014 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:15:50.931344    4014 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-262000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-262000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-262000: (3.392591875s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-262000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-262000 status --format={{.Host}}: exit status 7 (52.298958ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-262000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-262000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.178019542s)

-- stdout --
	* [kubernetes-upgrade-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-262000" primary control-plane node in "kubernetes-upgrade-262000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-262000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-262000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:15:54.417926    4052 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:15:54.418057    4052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:15:54.418062    4052 out.go:358] Setting ErrFile to fd 2...
	I0819 04:15:54.418065    4052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:15:54.418189    4052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:15:54.419197    4052 out.go:352] Setting JSON to false
	I0819 04:15:54.435833    4052 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2717,"bootTime":1724063437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:15:54.435906    4052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:15:54.440885    4052 out.go:177] * [kubernetes-upgrade-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:15:54.447827    4052 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:15:54.447914    4052 notify.go:220] Checking for updates...
	I0819 04:15:54.453841    4052 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:15:54.456797    4052 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:15:54.459857    4052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:15:54.462859    4052 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:15:54.465780    4052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:15:54.469049    4052 config.go:182] Loaded profile config "kubernetes-upgrade-262000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 04:15:54.469293    4052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:15:54.473932    4052 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:15:54.480844    4052 start.go:297] selected driver: qemu2
	I0819 04:15:54.480849    4052 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:15:54.480891    4052 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:15:54.482970    4052 cni.go:84] Creating CNI manager for ""
	I0819 04:15:54.482986    4052 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:15:54.483010    4052 start.go:340] cluster config:
	{Name:kubernetes-upgrade-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:15:54.486475    4052 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:15:54.492849    4052 out.go:177] * Starting "kubernetes-upgrade-262000" primary control-plane node in "kubernetes-upgrade-262000" cluster
	I0819 04:15:54.496801    4052 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:15:54.496814    4052 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:15:54.496822    4052 cache.go:56] Caching tarball of preloaded images
	I0819 04:15:54.496873    4052 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:15:54.496878    4052 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:15:54.496929    4052 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/kubernetes-upgrade-262000/config.json ...
	I0819 04:15:54.497410    4052 start.go:360] acquireMachinesLock for kubernetes-upgrade-262000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:15:54.497435    4052 start.go:364] duration metric: took 19.792µs to acquireMachinesLock for "kubernetes-upgrade-262000"
	I0819 04:15:54.497444    4052 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:15:54.497450    4052 fix.go:54] fixHost starting: 
	I0819 04:15:54.497561    4052 fix.go:112] recreateIfNeeded on kubernetes-upgrade-262000: state=Stopped err=<nil>
	W0819 04:15:54.497568    4052 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:15:54.500823    4052 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-262000" ...
	I0819 04:15:54.508767    4052 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:15:54.508801    4052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:0e:d5:c9:2f:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2
	I0819 04:15:54.510591    4052 main.go:141] libmachine: STDOUT: 
	I0819 04:15:54.510611    4052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:15:54.510641    4052 fix.go:56] duration metric: took 13.193041ms for fixHost
	I0819 04:15:54.510645    4052 start.go:83] releasing machines lock for "kubernetes-upgrade-262000", held for 13.20625ms
	W0819 04:15:54.510652    4052 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:15:54.510689    4052 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:15:54.510694    4052 start.go:729] Will try again in 5 seconds ...
	I0819 04:15:59.512802    4052 start.go:360] acquireMachinesLock for kubernetes-upgrade-262000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:15:59.513018    4052 start.go:364] duration metric: took 159.958µs to acquireMachinesLock for "kubernetes-upgrade-262000"
	I0819 04:15:59.513058    4052 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:15:59.513067    4052 fix.go:54] fixHost starting: 
	I0819 04:15:59.513513    4052 fix.go:112] recreateIfNeeded on kubernetes-upgrade-262000: state=Stopped err=<nil>
	W0819 04:15:59.513530    4052 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:15:59.518993    4052 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-262000" ...
	I0819 04:15:59.526861    4052 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:15:59.527012    4052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:0e:d5:c9:2f:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubernetes-upgrade-262000/disk.qcow2
	I0819 04:15:59.533689    4052 main.go:141] libmachine: STDOUT: 
	I0819 04:15:59.533755    4052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:15:59.533812    4052 fix.go:56] duration metric: took 20.745666ms for fixHost
	I0819 04:15:59.533824    4052 start.go:83] releasing machines lock for "kubernetes-upgrade-262000", held for 20.792042ms
	W0819 04:15:59.533958    4052 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-262000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-262000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:15:59.539184    4052 out.go:201] 
	W0819 04:15:59.542850    4052 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:15:59.542869    4052 out.go:270] * 
	* 
	W0819 04:15:59.544439    4052 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:15:59.553831    4052 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-262000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-262000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-262000 version --output=json: exit status 1 (54.653291ms)

** stderr ** 
	error: context "kubernetes-upgrade-262000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-19 04:15:59.623058 -0700 PDT m=+2467.842448626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-262000 -n kubernetes-upgrade-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-262000 -n kubernetes-upgrade-262000: exit status 7 (32.452ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-262000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-262000
--- FAIL: TestKubernetesUpgrade (18.63s)
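
Note on the failure mode above: every start attempt in this test dies at the same step. The qemu2 driver does not launch QEMU directly; it execs /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet and then hand the resulting file descriptor to QEMU's "-netdev socket,id=net0,fd=3". "Connection refused" therefore means no socket_vmnet daemon was listening on this agent. A minimal triage sketch from the agent's shell (assumptions: socket_vmnet was installed via Homebrew, and the paths match those logged above):

	# Is anything listening on the socket the driver dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# socket_vmnet needs root to create the vmnet interface; one way to (re)start it:
	sudo brew services start socket_vmnet
	# or run the daemon directly (the gateway address here is an assumption):
	# sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon listening, the socket_vmnet_client invocation logged above should reach QEMU instead of exiting with status 1.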

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.36s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19476
- KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1617394987/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.36s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.15s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19476
- KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2197094676/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.15s)
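
Note: both TestHyperkitDriverSkipUpgrade subtests fail before any upgrade logic runs, because the hyperkit driver only exists for darwin/amd64 and this agent is darwin/arm64 (hence DRV_UNSUPPORTED_OS, exit status 56). A hypothetical pre-flight guard for a CI wrapper script is sketched below; whether the suite should skip rather than fail here is a policy question, and this guard is an assumption, not existing minikube code:

	# Skip hyperkit-only tests on Apple Silicon agents (sketch).
	if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]; then
	    echo "SKIP: hyperkit driver is not supported on darwin/arm64" >&2
	    exit 0
	fi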

TestStoppedBinaryUpgrade/Upgrade (588.51s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.152714474 start -p stopped-upgrade-446000 --memory=2200 --vm-driver=qemu2 
E0819 04:16:35.888754    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.152714474 start -p stopped-upgrade-446000 --memory=2200 --vm-driver=qemu2 : (48.141449125s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.152714474 -p stopped-upgrade-446000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.152714474 -p stopped-upgrade-446000 stop: (12.101263542s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-446000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0819 04:18:32.787025    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 04:20:18.680200    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-446000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m48.177360375s)

-- stdout --
	* [stopped-upgrade-446000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-446000" primary control-plane node in "stopped-upgrade-446000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-446000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0819 04:17:01.766790    4093 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:17:01.766927    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:01.766930    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:17:01.766933    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:01.767114    4093 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:17:01.768270    4093 out.go:352] Setting JSON to false
	I0819 04:17:01.785825    4093 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2784,"bootTime":1724063437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:17:01.785908    4093 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:17:01.789201    4093 out.go:177] * [stopped-upgrade-446000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:17:01.797076    4093 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:17:01.797133    4093 notify.go:220] Checking for updates...
	I0819 04:17:01.804994    4093 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:17:01.808094    4093 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:17:01.811134    4093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:17:01.812525    4093 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:17:01.816062    4093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:17:01.819328    4093 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:17:01.823147    4093 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 04:17:01.826066    4093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:17:01.829068    4093 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:17:01.836088    4093 start.go:297] selected driver: qemu2
	I0819 04:17:01.836093    4093 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50464 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:17:01.836163    4093 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:17:01.838866    4093 cni.go:84] Creating CNI manager for ""
	I0819 04:17:01.838885    4093 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:17:01.838914    4093 start.go:340] cluster config:
	{Name:stopped-upgrade-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50464 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:17:01.838973    4093 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:17:01.847024    4093 out.go:177] * Starting "stopped-upgrade-446000" primary control-plane node in "stopped-upgrade-446000" cluster
	I0819 04:17:01.851089    4093 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:17:01.851107    4093 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 04:17:01.851116    4093 cache.go:56] Caching tarball of preloaded images
	I0819 04:17:01.851180    4093 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:17:01.851186    4093 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 04:17:01.851254    4093 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/config.json ...
	I0819 04:17:01.851712    4093 start.go:360] acquireMachinesLock for stopped-upgrade-446000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:17:01.851748    4093 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "stopped-upgrade-446000"
	I0819 04:17:01.851758    4093 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:17:01.851763    4093 fix.go:54] fixHost starting: 
	I0819 04:17:01.851879    4093 fix.go:112] recreateIfNeeded on stopped-upgrade-446000: state=Stopped err=<nil>
	W0819 04:17:01.851888    4093 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:17:01.860026    4093 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-446000" ...
	I0819 04:17:01.864115    4093 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:17:01.864200    4093 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50429-:22,hostfwd=tcp::50430-:2376,hostname=stopped-upgrade-446000 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/disk.qcow2
	I0819 04:17:01.910694    4093 main.go:141] libmachine: STDOUT: 
	I0819 04:17:01.910721    4093 main.go:141] libmachine: STDERR: 
	I0819 04:17:01.910726    4093 main.go:141] libmachine: Waiting for VM to start (ssh -p 50429 docker@127.0.0.1)...
	I0819 04:17:21.402328    4093 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/config.json ...
	I0819 04:17:21.402876    4093 machine.go:93] provisionDockerMachine start ...
	I0819 04:17:21.402996    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.403330    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.403340    4093 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 04:17:21.483712    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 04:17:21.483750    4093 buildroot.go:166] provisioning hostname "stopped-upgrade-446000"
	I0819 04:17:21.483853    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.484101    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.484112    4093 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-446000 && echo "stopped-upgrade-446000" | sudo tee /etc/hostname
	I0819 04:17:21.566078    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-446000
	
	I0819 04:17:21.566178    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.566373    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.566388    4093 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-446000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-446000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-446000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 04:17:21.638067    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 04:17:21.638086    4093 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19476-967/.minikube CaCertPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19476-967/.minikube}
	I0819 04:17:21.638103    4093 buildroot.go:174] setting up certificates
	I0819 04:17:21.638112    4093 provision.go:84] configureAuth start
	I0819 04:17:21.638120    4093 provision.go:143] copyHostCerts
	I0819 04:17:21.638217    4093 exec_runner.go:144] found /Users/jenkins/minikube-integration/19476-967/.minikube/cert.pem, removing ...
	I0819 04:17:21.638280    4093 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19476-967/.minikube/cert.pem
	I0819 04:17:21.638424    4093 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19476-967/.minikube/cert.pem (1123 bytes)
	I0819 04:17:21.638695    4093 exec_runner.go:144] found /Users/jenkins/minikube-integration/19476-967/.minikube/key.pem, removing ...
	I0819 04:17:21.638700    4093 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19476-967/.minikube/key.pem
	I0819 04:17:21.638774    4093 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19476-967/.minikube/key.pem (1675 bytes)
	I0819 04:17:21.638924    4093 exec_runner.go:144] found /Users/jenkins/minikube-integration/19476-967/.minikube/ca.pem, removing ...
	I0819 04:17:21.638929    4093 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19476-967/.minikube/ca.pem
	I0819 04:17:21.639002    4093 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19476-967/.minikube/ca.pem (1078 bytes)
	I0819 04:17:21.639135    4093 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19476-967/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-446000 san=[127.0.0.1 localhost minikube stopped-upgrade-446000]
	I0819 04:17:21.684969    4093 provision.go:177] copyRemoteCerts
	I0819 04:17:21.684998    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 04:17:21.685005    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:17:21.719811    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 04:17:21.726575    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 04:17:21.735134    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 04:17:21.741997    4093 provision.go:87] duration metric: took 103.880583ms to configureAuth
	I0819 04:17:21.742006    4093 buildroot.go:189] setting minikube options for container-runtime
	I0819 04:17:21.742132    4093 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:17:21.742164    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.742251    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.742256    4093 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 04:17:21.805743    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 04:17:21.805753    4093 buildroot.go:70] root file system type: tmpfs
	I0819 04:17:21.805801    4093 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 04:17:21.805892    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.806003    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.806037    4093 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 04:17:21.873736    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 04:17:21.873790    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:21.873912    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:21.873921    4093 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 04:17:22.218616    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 04:17:22.218628    4093 machine.go:96] duration metric: took 815.752125ms to provisionDockerMachine
	I0819 04:17:22.218639    4093 start.go:293] postStartSetup for "stopped-upgrade-446000" (driver="qemu2")
	I0819 04:17:22.218646    4093 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 04:17:22.218725    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 04:17:22.218734    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:17:22.251829    4093 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 04:17:22.253061    4093 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 04:17:22.253069    4093 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19476-967/.minikube/addons for local assets ...
	I0819 04:17:22.253153    4093 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19476-967/.minikube/files for local assets ...
	I0819 04:17:22.253273    4093 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem -> 14342.pem in /etc/ssl/certs
	I0819 04:17:22.253395    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 04:17:22.256378    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem --> /etc/ssl/certs/14342.pem (1708 bytes)
	I0819 04:17:22.263072    4093 start.go:296] duration metric: took 44.428583ms for postStartSetup
	I0819 04:17:22.263087    4093 fix.go:56] duration metric: took 20.411581833s for fixHost
	I0819 04:17:22.263122    4093 main.go:141] libmachine: Using SSH client type: native
	I0819 04:17:22.263228    4093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d7c5a0] 0x100d7ee00 <nil>  [] 0s} localhost 50429 <nil> <nil>}
	I0819 04:17:22.263232    4093 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 04:17:22.325534    4093 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724066242.656644629
	
	I0819 04:17:22.325544    4093 fix.go:216] guest clock: 1724066242.656644629
	I0819 04:17:22.325550    4093 fix.go:229] Guest: 2024-08-19 04:17:22.656644629 -0700 PDT Remote: 2024-08-19 04:17:22.263089 -0700 PDT m=+20.521331959 (delta=393.555629ms)
	I0819 04:17:22.325563    4093 fix.go:200] guest clock delta is within tolerance: 393.555629ms
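
The fixHost step compares the guest's "date +%s.%N" output against the host clock and accepts any skew under its tolerance (393ms here). A standalone sketch of the same measurement, assuming GNU date on both ends and a hypothetical SSH target:

	guest=$(ssh docker@guest-vm 'date +%s.%N')   # hypothetical alias for the VM
	host=$(date +%s.%N)
	# absolute skew in seconds; awk handles the fractional arithmetic
	awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.6fs\n", d }'
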
	I0819 04:17:22.325566    4093 start.go:83] releasing machines lock for "stopped-upgrade-446000", held for 20.474071s
	I0819 04:17:22.325627    4093 ssh_runner.go:195] Run: cat /version.json
	I0819 04:17:22.325634    4093 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 04:17:22.325635    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:17:22.325655    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	W0819 04:17:22.326308    4093 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50429: connect: connection refused
	I0819 04:17:22.326335    4093 retry.go:31] will retry after 347.605855ms: dial tcp [::1]:50429: connect: connection refused
	W0819 04:17:22.731456    4093 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 04:17:22.731608    4093 ssh_runner.go:195] Run: systemctl --version
	I0819 04:17:22.735633    4093 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 04:17:22.739066    4093 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 04:17:22.739137    4093 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 04:17:22.745478    4093 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 04:17:22.753915    4093 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
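
The two find/sed one-liners above prune IPv6 "dst" and "subnet" entries from any bridge CNI config and pin the remaining subnet (and, for podman, the gateway) to the 10.244.0.0/16 pod CIDR. An unrolled sketch of the bridge rewrite, equivalent but easier to read:

	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*bridge*' \
	    -not -name '*podman*' -not -name '*.mk_disabled' |
	while IFS= read -r f; do
	    sudo sed -i -r \
	        -e '/"dst": ".*:.*"/d' \
	        -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' \
	        -e '/"subnet": ".*:.*"/d' \
	        -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' \
	        "$f"
	done
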
	I0819 04:17:22.753931    4093 start.go:495] detecting cgroup driver to use...
	I0819 04:17:22.754045    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:17:22.763865    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 04:17:22.768262    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 04:17:22.772578    4093 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 04:17:22.772609    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 04:17:22.776564    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:17:22.780414    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 04:17:22.784019    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:17:22.787365    4093 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 04:17:22.790279    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 04:17:22.793239    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 04:17:22.796493    4093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
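
The sed edits above switch containerd's CRI plugin to the cgroupfs driver and the runc v2 runtime. One way to confirm the result on the guest; the commented lines are the approximate fragment of /etc/containerd/config.toml expected after the edits, not the full file:

	grep -B 2 'SystemdCgroup' /etc/containerd/config.toml
	# expected, approximately:
	#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	#     SystemdCgroup = false
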
	I0819 04:17:22.799998    4093 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 04:17:22.802871    4093 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 04:17:22.805641    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:22.864616    4093 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 04:17:22.871007    4093 start.go:495] detecting cgroup driver to use...
	I0819 04:17:22.871074    4093 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 04:17:22.882345    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:17:22.887445    4093 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 04:17:22.895905    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:17:22.900490    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:17:22.905114    4093 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 04:17:22.932539    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:17:22.937494    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:17:22.942910    4093 ssh_runner.go:195] Run: which cri-dockerd
	I0819 04:17:22.944230    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 04:17:22.947109    4093 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 04:17:22.952223    4093 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 04:17:23.018390    4093 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 04:17:23.082508    4093 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 04:17:23.082569    4093 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 04:17:23.087558    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:23.145294    4093 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:17:24.271748    4093 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.126451667s)
	I0819 04:17:24.271805    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 04:17:24.280069    4093 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 04:17:24.286029    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:17:24.290805    4093 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 04:17:24.349524    4093 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 04:17:24.417619    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:24.476438    4093 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 04:17:24.482294    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:17:24.486692    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:24.575800    4093 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 04:17:24.619211    4093 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 04:17:24.619303    4093 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 04:17:24.621228    4093 start.go:563] Will wait 60s for crictl version
	I0819 04:17:24.621274    4093 ssh_runner.go:195] Run: which crictl
	I0819 04:17:24.622738    4093 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 04:17:24.636891    4093 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 04:17:24.636957    4093 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:17:24.653133    4093 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:17:24.678996    4093 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 04:17:24.679060    4093 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 04:17:24.680393    4093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
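
The /etc/hosts rewrite above filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back, since a plain sudo redirection into /etc/hosts would be performed by the unprivileged shell. The same idiom with hypothetical values:

	name=host.example.internal; ip=192.0.2.1     # hypothetical name and address
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$
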
	I0819 04:17:24.684188    4093 kubeadm.go:883] updating cluster {Name:stopped-upgrade-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50464 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 04:17:24.684232    4093 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:17:24.684272    4093 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:17:24.694556    4093 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:17:24.694575    4093 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:17:24.694619    4093 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:17:24.697485    4093 ssh_runner.go:195] Run: which lz4
	I0819 04:17:24.698902    4093 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 04:17:24.700124    4093 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 04:17:24.700133    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 04:17:25.646464    4093 docker.go:649] duration metric: took 947.6025ms to copy over tarball
	I0819 04:17:25.646525    4093 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 04:17:26.808560    4093 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162036s)
	I0819 04:17:26.808573    4093 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 04:17:26.824313    4093 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:17:26.828109    4093 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 04:17:26.833078    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:26.902240    4093 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:17:29.212033    4093 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.309806375s)
	I0819 04:17:29.212143    4093 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:17:29.223914    4093 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:17:29.223923    4093 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:17:29.223927    4093 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 04:17:29.227778    4093 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.229435    4093 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.231267    4093 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.231810    4093 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.233244    4093 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.233527    4093 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.234996    4093 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.235025    4093 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.236981    4093 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.237002    4093 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 04:17:29.238613    4093 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.238668    4093 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.239633    4093 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.239714    4093 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 04:17:29.240456    4093 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.241031    4093 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.664630    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.665515    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.675408    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.685046    4093 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 04:17:29.685088    4093 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.685047    4093 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 04:17:29.685121    4093 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.685143    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:17:29.685151    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:17:29.688893    4093 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 04:17:29.688917    4093 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.688971    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:17:29.693295    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.700610    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 04:17:29.700653    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 04:17:29.711932    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 04:17:29.714999    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 04:17:29.716267    4093 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 04:17:29.716286    4093 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.716318    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:17:29.728123    4093 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 04:17:29.728147    4093 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 04:17:29.728203    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 04:17:29.728826    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 04:17:29.742488    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 04:17:29.742610    4093 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 04:17:29.745045    4093 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 04:17:29.745058    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 04:17:29.752999    4093 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 04:17:29.753009    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0819 04:17:29.753647    4093 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 04:17:29.753757    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.763798    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.790787    4093 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0819 04:17:29.790837    4093 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 04:17:29.790857    4093 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.790864    4093 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 04:17:29.790874    4093 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.790913    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 04:17:29.790913    4093 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:17:29.815237    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 04:17:29.815250    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 04:17:29.815350    4093 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:17:29.816789    4093 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 04:17:29.816803    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0819 04:17:29.851727    4093 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 04:17:29.851824    4093 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.855577    4093 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:17:29.855588    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 04:17:29.866791    4093 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 04:17:29.866816    4093 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.866883    4093 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:17:29.899966    4093 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 04:17:29.900008    4093 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 04:17:29.900112    4093 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:17:29.901581    4093 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 04:17:29.901594    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 04:17:29.930333    4093 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:17:29.930348    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 04:17:30.168392    4093 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 04:17:30.168430    4093 cache_images.go:92] duration metric: took 944.508166ms to LoadCachedImages
	W0819 04:17:30.168464    4093 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0819 04:17:30.168471    4093 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 04:17:30.168523    4093 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-446000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 04:17:30.168591    4093 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 04:17:30.183720    4093 cni.go:84] Creating CNI manager for ""
	I0819 04:17:30.183732    4093 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:17:30.183738    4093 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 04:17:30.183747    4093 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-446000 NodeName:stopped-upgrade-446000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 04:17:30.183811    4093 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-446000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
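
The config above packs InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into the single file kubeadm consumes. One way to see how far it strays from upstream defaults, assuming the kubeadm binary already staged under /var/lib/minikube/binaries:

	sudo /var/lib/minikube/binaries/v1.24.1/kubeadm config print init-defaults \
	    --component-configs KubeletConfiguration,KubeProxyConfiguration > /tmp/defaults.yaml
	diff -u /tmp/defaults.yaml /var/tmp/minikube/kubeadm.yaml
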
	
	I0819 04:17:30.183875    4093 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 04:17:30.187158    4093 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 04:17:30.187184    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 04:17:30.189654    4093 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 04:17:30.194534    4093 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 04:17:30.199101    4093 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0819 04:17:30.204375    4093 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 04:17:30.205508    4093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 04:17:30.209257    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:17:30.274671    4093 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:17:30.279980    4093 certs.go:68] Setting up /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000 for IP: 10.0.2.15
	I0819 04:17:30.279997    4093 certs.go:194] generating shared ca certs ...
	I0819 04:17:30.280005    4093 certs.go:226] acquiring lock for ca certs: {Name:mk0a363c308d59dcc2ce68f87ac07833cd4c8b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:17:30.280165    4093 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19476-967/.minikube/ca.key
	I0819 04:17:30.280222    4093 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19476-967/.minikube/proxy-client-ca.key
	I0819 04:17:30.280227    4093 certs.go:256] generating profile certs ...
	I0819 04:17:30.280334    4093 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.key
	I0819 04:17:30.280353    4093 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key.79083a89
	I0819 04:17:30.280373    4093 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt.79083a89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 04:17:30.377677    4093 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt.79083a89 ...
	I0819 04:17:30.377689    4093 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt.79083a89: {Name:mk6e775c3f27064abb4a4684c0772522306ade8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:17:30.378130    4093 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key.79083a89 ...
	I0819 04:17:30.378142    4093 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key.79083a89: {Name:mkf086d304ce0538594ff4dfb6a94e5895aa61d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:17:30.378302    4093 certs.go:381] copying /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt.79083a89 -> /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt
	I0819 04:17:30.378442    4093 certs.go:385] copying /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key.79083a89 -> /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key
	I0819 04:17:30.378598    4093 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/proxy-client.key
	I0819 04:17:30.378729    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/1434.pem (1338 bytes)
	W0819 04:17:30.378758    4093 certs.go:480] ignoring /Users/jenkins/minikube-integration/19476-967/.minikube/certs/1434_empty.pem, impossibly tiny 0 bytes
	I0819 04:17:30.378762    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 04:17:30.378783    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem (1078 bytes)
	I0819 04:17:30.378801    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem (1123 bytes)
	I0819 04:17:30.378819    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/certs/key.pem (1675 bytes)
	I0819 04:17:30.378859    4093 certs.go:484] found cert: /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem (1708 bytes)
	I0819 04:17:30.379215    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 04:17:30.386119    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0819 04:17:30.393016    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 04:17:30.400444    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 04:17:30.407972    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 04:17:30.414791    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 04:17:30.421593    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 04:17:30.429346    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 04:17:30.436845    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/certs/1434.pem --> /usr/share/ca-certificates/1434.pem (1338 bytes)
	I0819 04:17:30.444461    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/ssl/certs/14342.pem --> /usr/share/ca-certificates/14342.pem (1708 bytes)
	I0819 04:17:30.451710    4093 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 04:17:30.458686    4093 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 04:17:30.463635    4093 ssh_runner.go:195] Run: openssl version
	I0819 04:17:30.465607    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434.pem && ln -fs /usr/share/ca-certificates/1434.pem /etc/ssl/certs/1434.pem"
	I0819 04:17:30.469105    4093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434.pem
	I0819 04:17:30.470648    4093 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 10:42 /usr/share/ca-certificates/1434.pem
	I0819 04:17:30.470673    4093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434.pem
	I0819 04:17:30.472447    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1434.pem /etc/ssl/certs/51391683.0"
	I0819 04:17:30.475642    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14342.pem && ln -fs /usr/share/ca-certificates/14342.pem /etc/ssl/certs/14342.pem"
	I0819 04:17:30.478559    4093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14342.pem
	I0819 04:17:30.479861    4093 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 10:42 /usr/share/ca-certificates/14342.pem
	I0819 04:17:30.479880    4093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14342.pem
	I0819 04:17:30.481644    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14342.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 04:17:30.484919    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 04:17:30.488398    4093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:17:30.489941    4093 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:35 /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:17:30.489960    4093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:17:30.491827    4093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
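
The openssl x509 -hash calls above compute the subject hash that names each /etc/ssl/certs symlink; b5213941.0 is that hash for minikubeCA.pem. The linkage can be reproduced by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
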
	I0819 04:17:30.494800    4093 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 04:17:30.496340    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 04:17:30.498421    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 04:17:30.500183    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 04:17:30.502259    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 04:17:30.504070    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 04:17:30.506050    4093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
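
Each -checkend 86400 run above exits non-zero when the certificate expires within 24 hours, which is the signal to regenerate it. A sketch of the same check in isolation:

	if ! sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	    echo "apiserver cert expires within 24h"    # hypothetical handling; minikube would regenerate
	fi
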
	I0819 04:17:30.507855    4093 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50464 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:17:30.507916    4093 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:17:30.518237    4093 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 04:17:30.521236    4093 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 04:17:30.521241    4093 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 04:17:30.521270    4093 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 04:17:30.525085    4093 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:17:30.525393    4093 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-446000" does not appear in /Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:17:30.525486    4093 kubeconfig.go:62] /Users/jenkins/minikube-integration/19476-967/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-446000" cluster setting kubeconfig missing "stopped-upgrade-446000" context setting]
	I0819 04:17:30.525678    4093 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/kubeconfig: {Name:mkcc8b27cbda2ef567c4911aa335c1e1951a7d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:17:30.526123    4093 kapi.go:59] client config for stopped-upgrade-446000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.key", CAFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102335610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:17:30.526453    4093 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 04:17:30.529267    4093 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-446000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0819 04:17:30.529273    4093 kubeadm.go:1160] stopping kube-system containers ...
	I0819 04:17:30.529312    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:17:30.540264    4093 docker.go:483] Stopping containers: [ce491870b40f 672093e300cc b3b1f57bf431 f5cd372c916c f610b8f4a094 12cba185f1e7 6add09fad9b2 d5f6a5d583d3 a97e1971b34a]
	I0819 04:17:30.540331    4093 ssh_runner.go:195] Run: docker stop ce491870b40f 672093e300cc b3b1f57bf431 f5cd372c916c f610b8f4a094 12cba185f1e7 6add09fad9b2 d5f6a5d583d3 a97e1971b34a
	I0819 04:17:30.551450    4093 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 04:17:30.556667    4093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:17:30.559880    4093 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:17:30.559887    4093 kubeadm.go:157] found existing configuration files:
	
	I0819 04:17:30.559908    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/admin.conf
	I0819 04:17:30.562333    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:17:30.562355    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:17:30.565033    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/kubelet.conf
	I0819 04:17:30.567827    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:17:30.567854    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:17:30.570440    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/controller-manager.conf
	I0819 04:17:30.573042    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:17:30.573062    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:17:30.576097    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/scheduler.conf
	I0819 04:17:30.578474    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:17:30.578494    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 04:17:30.581041    4093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:17:30.583904    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:17:30.607165    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:17:31.096972    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:17:31.211552    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:17:31.238303    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
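
The five kubeadm init phases above regenerate, in order, the PKI, the kubeconfigs, the kubelet bootstrap, the control-plane static pods, and local etcd. Their artifacts can be spot-checked on the guest (paths follow the certificatesDir and staticPodPath from the config above):

	sudo ls /var/lib/minikube/certs        # from "init phase certs all"
	sudo ls /etc/kubernetes/*.conf         # from "init phase kubeconfig all"
	sudo ls /etc/kubernetes/manifests      # from "control-plane all" and "etcd local"
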
	I0819 04:17:31.258495    4093 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:17:31.258585    4093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:17:31.760743    4093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:17:32.260629    4093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:17:32.266629    4093 api_server.go:72] duration metric: took 1.0081475s to wait for apiserver process to appear ...
	I0819 04:17:32.266639    4093 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:17:32.266653    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:37.268725    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:37.268751    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:42.268952    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:42.268994    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:47.269365    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:47.269386    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:52.269746    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:52.269767    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:17:57.270283    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:17:57.270331    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:02.270974    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:02.271021    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:07.272360    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:07.272387    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:12.273585    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:12.273606    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:17.274050    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:17.274137    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:22.276391    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:22.276425    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:27.278576    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:27.278602    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:32.280770    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
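[Editor's note] Each healthz probe above fails with a 5-second client timeout ("context deadline exceeded ... while awaiting headers") and is retried immediately, so the twelve failures between 04:17:32 and 04:18:32 cover one minute of an unreachable API server. A hedged curl equivalent of a single probe (endpoint and timeout taken from the log):

    # One health probe against the same endpoint, with the 5s client timeout seen in the log;
    # -k skips TLS verification (the apiserver cert is self-signed), -f fails on HTTP errors
    curl -ksf --max-time 5 https://10.0.2.15:8443/healthz \
      && echo "apiserver healthy" || echo "probe failed (timeout or refused)"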
	I0819 04:18:32.280947    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:18:32.292778    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:18:32.292857    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:18:32.304126    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:18:32.304202    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:18:32.314991    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:18:32.315051    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:18:32.325980    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:18:32.326065    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:18:32.336613    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:18:32.336684    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:18:32.346922    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:18:32.346998    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:18:32.366953    4093 logs.go:276] 0 containers: []
	W0819 04:18:32.366964    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:18:32.367022    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:18:32.378966    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
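[Editor's note] After a minute of failed probes, the collector pauses to enumerate containers per control-plane component with name-filtered `docker ps -a` queries; the two IDs returned for most components are the pre-stop and post-restart containers, while coredns and kube-proxy have one each and kindnet none. The same enumeration as a loop (component names taken from the filters above; sketch only):

    # List container IDs per component, mirroring the name filters in the log
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_$c" --format '{{.ID}}' | tr '\n' ' ')
      echo "$c: ${ids:-<none>}"
    done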
	I0819 04:18:32.378986    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:18:32.378992    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:18:32.393609    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:18:32.393620    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:18:32.410720    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:18:32.410731    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:18:32.426516    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:18:32.426529    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:18:32.453102    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:18:32.453109    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:18:32.532657    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:18:32.532668    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:18:32.547021    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:18:32.547033    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:18:32.559374    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:18:32.559384    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:18:32.570607    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:18:32.570617    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:18:32.607262    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:18:32.607357    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:18:32.607933    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:18:32.607938    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:18:32.612211    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:18:32.612220    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:18:32.635153    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:18:32.635167    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:18:32.649919    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:18:32.649929    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:18:32.693640    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:18:32.693652    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:18:32.709018    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:18:32.709032    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:18:32.720762    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:18:32.720778    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:18:32.733314    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:18:32.733329    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
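[Editor's note] The container-status command above relies on a backtick fallback: if crictl is not on the PATH, `which` fails, the literal word crictl is substituted, that invocation fails too, and `docker ps -a` runs instead. The same chain spelled out with modern $() substitution (same behavior; sketch):

    # Equivalent of the log's fallback chain, using $() instead of backticks
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a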
	I0819 04:18:32.745367    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:18:32.745378    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:18:32.745406    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:18:32.745410    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:18:32.745414    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:18:32.745419    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:18:32.745422    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
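[Editor's note] Every gathering pass ends the same way: the two kubelet reflector errors found in the journal (the coredns ConfigMap list/watch denied by the node authorizer because no relationship exists between node 'stopped-upgrade-446000' and the object) are echoed as "Problems detected in kubelet", and the healthz loop resumes. A direct way to pull those same lines from the guest's journal (the grep pattern is an assumption derived from the messages above):

    # Surface the kubelet reflector failures flagged by logs.go above
    sudo journalctl -u kubelet -n 400 | grep -E 'reflector\.go.*(failed to list|Failed to watch)'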
	I0819 04:18:42.749522    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:18:47.751902    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:18:47.752149    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:18:47.776468    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:18:47.776556    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:18:47.790711    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:18:47.790793    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:18:47.804641    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:18:47.804714    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:18:47.818541    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:18:47.818608    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:18:47.829723    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:18:47.829796    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:18:47.840575    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:18:47.840646    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:18:47.851149    4093 logs.go:276] 0 containers: []
	W0819 04:18:47.851162    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:18:47.851227    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:18:47.860941    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:18:47.860961    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:18:47.860966    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:18:47.872434    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:18:47.872446    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:18:47.893367    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:18:47.893377    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:18:47.904938    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:18:47.904947    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:18:47.943731    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:18:47.943823    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:18:47.944392    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:18:47.944398    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:18:47.948492    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:18:47.948499    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:18:47.974199    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:18:47.974210    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:18:47.985526    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:18:47.985536    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:18:47.999306    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:18:47.999319    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:18:48.036186    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:18:48.036197    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:18:48.051969    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:18:48.051979    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:18:48.088551    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:18:48.088561    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:18:48.100090    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:18:48.100101    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:18:48.111018    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:18:48.111028    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:18:48.134806    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:18:48.134814    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:18:48.148317    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:18:48.148327    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:18:48.166121    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:18:48.166132    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:18:48.178291    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:18:48.178301    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:18:48.178326    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:18:48.178332    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:18:48.178335    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:18:48.178338    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:18:48.178341    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:18:58.182328    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:03.184667    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:03.184952    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:03.210439    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:19:03.210551    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:03.226683    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:19:03.226767    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:03.240114    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:19:03.240193    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:03.251339    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:19:03.251414    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:03.261398    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:19:03.261467    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:03.275234    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:19:03.275307    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:03.287119    4093 logs.go:276] 0 containers: []
	W0819 04:19:03.287134    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:03.287195    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:03.302690    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:19:03.302712    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:19:03.302718    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:19:03.316398    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:19:03.316408    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:19:03.354008    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:19:03.354021    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:19:03.375292    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:19:03.375304    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:19:03.391160    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:03.391171    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:03.414326    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:19:03.414335    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:03.425998    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:03.426008    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:03.462595    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:19:03.462608    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:19:03.480539    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:19:03.480551    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:19:03.492613    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:19:03.492625    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:19:03.509800    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:19:03.509812    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:19:03.522081    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:03.522091    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:03.557885    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:03.557977    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:03.558563    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:19:03.558568    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:19:03.570334    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:19:03.570346    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:19:03.592291    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:19:03.592302    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:19:03.604294    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:19:03.604304    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:19:03.614962    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:03.614974    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:03.619005    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:03.619015    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:03.619043    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:19:03.619048    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:03.619051    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:03.619055    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:03.619059    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:13.623089    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:18.625438    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:18.625678    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:18.650953    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:19:18.651041    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:18.662990    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:19:18.663071    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:18.673783    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:19:18.673856    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:18.683561    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:19:18.683630    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:18.693845    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:19:18.693915    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:18.704870    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:19:18.704943    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:18.715006    4093 logs.go:276] 0 containers: []
	W0819 04:19:18.715022    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:18.715081    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:18.726070    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:19:18.726086    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:19:18.726091    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:19:18.740650    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:19:18.740664    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:19:18.752936    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:19:18.752948    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:19:18.774349    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:19:18.774362    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:18.786093    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:18.786107    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:18.790315    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:19:18.790322    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:19:18.808917    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:19:18.808929    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:19:18.823306    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:18.823318    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:18.861545    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:18.861638    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:18.862243    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:19:18.862253    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:19:18.899580    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:19:18.899593    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:19:18.911250    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:18.911260    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:18.935032    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:19:18.935041    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:19:18.946520    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:19:18.946532    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:19:18.959434    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:19:18.959443    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:19:18.970559    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:18.970573    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:19.006262    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:19:19.006277    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:19:19.024064    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:19:19.024082    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:19:19.043613    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:19.043622    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:19.043652    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:19:19.043657    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:19.043661    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:19.043667    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:19.043670    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:29.047768    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:34.050120    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:34.050370    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:34.073332    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:19:34.073437    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:34.088154    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:19:34.088238    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:34.100032    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:19:34.100096    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:34.111169    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:19:34.111237    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:34.128057    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:19:34.128130    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:34.138646    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:19:34.138717    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:34.153625    4093 logs.go:276] 0 containers: []
	W0819 04:19:34.153637    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:34.153716    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:34.164515    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:19:34.164535    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:34.164541    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:34.169254    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:34.169261    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:34.204144    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:19:34.204155    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:19:34.225459    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:19:34.225468    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:19:34.247819    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:19:34.247828    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:19:34.259760    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:34.259771    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:34.284586    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:19:34.284595    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:34.297908    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:19:34.297921    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:19:34.313376    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:19:34.313386    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:19:34.351299    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:19:34.351311    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:19:34.365480    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:19:34.365492    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:19:34.377433    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:19:34.377444    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:19:34.389990    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:19:34.390002    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:19:34.401517    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:34.401527    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:34.439638    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:34.439731    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:34.440329    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:19:34.440334    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:19:34.455050    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:19:34.455063    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:19:34.465953    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:19:34.465964    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:19:34.482787    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:34.482799    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:34.482824    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:19:34.482830    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:34.482834    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:34.482838    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:34.482844    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:44.484874    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:19:49.487250    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:19:49.487596    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:19:49.517690    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:19:49.517815    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:19:49.535617    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:19:49.535713    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:19:49.553514    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:19:49.553585    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:19:49.571525    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:19:49.571589    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:19:49.582039    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:19:49.582097    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:19:49.592560    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:19:49.592638    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:19:49.602449    4093 logs.go:276] 0 containers: []
	W0819 04:19:49.602461    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:19:49.602524    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:19:49.613146    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:19:49.613164    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:19:49.613169    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:19:49.650750    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:49.650846    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:49.651452    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:19:49.651459    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:19:49.655642    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:19:49.655653    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:19:49.670354    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:19:49.670367    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:19:49.684526    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:19:49.684537    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:19:49.697437    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:19:49.697450    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:19:49.708741    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:19:49.708751    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:19:49.725836    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:19:49.725847    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:19:49.737210    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:19:49.737221    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:19:49.772950    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:19:49.772963    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:19:49.812428    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:19:49.812447    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:19:49.824461    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:19:49.824471    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:19:49.836433    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:19:49.836444    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:19:49.848506    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:19:49.848517    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:19:49.865790    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:19:49.865801    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:19:49.880469    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:19:49.880482    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:19:49.901858    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:19:49.901870    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:19:49.925117    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:49.925128    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:19:49.925154    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:19:49.925159    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:19:49.925162    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:19:49.925168    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:19:49.925171    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:19:59.927679    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:04.929927    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:04.930191    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:04.951346    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:20:04.951447    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:04.966572    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:20:04.966660    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:04.978416    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:20:04.978487    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:04.989414    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:20:04.989482    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:04.999523    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:20:04.999589    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:05.014561    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:20:05.014632    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:05.024496    4093 logs.go:276] 0 containers: []
	W0819 04:20:05.024506    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:05.024562    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:05.040124    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:20:05.040144    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:20:05.040150    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:20:05.065434    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:20:05.065447    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:20:05.077740    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:05.077752    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:05.102181    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:05.102188    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:05.106215    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:20:05.106222    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:20:05.120535    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:20:05.120544    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:20:05.132290    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:20:05.132300    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:20:05.145031    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:05.145041    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:05.183450    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:05.183561    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
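The "Found kubelet problem" warnings are produced by scanning the journalctl output for failure-looking lines. A simplified scanner in that spirit (the regexp here is an assumption; minikube's real detector in logs.go matches a more specific set of patterns, and the log runs journalctl via sudo over SSH):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
)

var problemRe = regexp.MustCompile(`(?i)\b(failed|forbidden|error)\b`)

func main() {
	// Locally this typically needs elevated privileges, as in the log's
	// "sudo journalctl -u kubelet -n 400".
	cmd := exec.Command("journalctl", "-u", "kubelet", "-n", "400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		if line := sc.Text(); problemRe.MatchString(line) {
			fmt.Println("Found kubelet problem:", line)
		}
	}
	cmd.Wait()
}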
	I0819 04:20:05.184150    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:20:05.184157    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:20:05.200085    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:20:05.200097    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:20:05.211272    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:20:05.211283    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:20:05.222868    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:20:05.222877    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:20:05.234393    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:20:05.234403    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
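The container-status command prefers crictl and falls back to plain docker when crictl is absent or failing, which is what the `which crictl || echo crictl` / `|| sudo docker ps -a` one-liner encodes. Roughly, in Go (a sketch of the fallback only, not minikube code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		// crictl missing or erroring: the shell one-liner falls back to
		// docker in exactly this case.
		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("container status unavailable:", err)
		return
	}
	fmt.Print(string(out))
}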
	I0819 04:20:05.246088    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:05.246116    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:05.280507    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:20:05.280518    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:20:05.317006    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:20:05.317018    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:20:05.331720    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:20:05.331733    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:20:05.354051    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:05.354061    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:05.354087    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:20:05.354091    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:05.354094    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:05.354098    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:05.354101    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:20:15.358219    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:20.360782    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:20.360995    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:20.377552    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:20:20.377640    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:20.393174    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:20:20.393254    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:20.404901    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:20:20.404981    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:20.415785    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:20:20.415860    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:20.426635    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:20:20.426699    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:20.437226    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:20:20.437297    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:20.447338    4093 logs.go:276] 0 containers: []
	W0819 04:20:20.447348    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:20.447401    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:20.457710    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:20:20.457726    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:20.457731    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:20.496348    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:20.496439    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:20.497002    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:20:20.497006    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:20:20.508556    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:20:20.508567    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:20:20.529903    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:20:20.529918    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:20:20.567714    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:20:20.567725    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:20:20.579798    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:20:20.579808    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:20:20.591267    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:20:20.591277    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:20:20.604645    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:20:20.604657    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:20:20.616288    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:20:20.616303    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:20.628562    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:20.628573    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:20.632694    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:20.632701    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:20.669836    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:20:20.669846    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:20:20.690957    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:20:20.690970    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:20:20.702642    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:20.702653    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:20.726351    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:20:20.726359    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:20:20.762769    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:20:20.762786    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:20:20.784640    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:20:20.784652    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:20:20.805532    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:20.805542    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:20.805572    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:20:20.805578    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:20.805581    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:20.805584    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:20.805587    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:20:30.809605    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:35.811893    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:35.812144    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:35.835262    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:20:35.835386    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:35.853842    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:20:35.853916    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:35.867270    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:20:35.867349    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:35.878740    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:20:35.878807    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:35.889175    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:20:35.889237    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:35.899855    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:20:35.899927    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:35.910096    4093 logs.go:276] 0 containers: []
	W0819 04:20:35.910107    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:35.910163    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:35.920889    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:20:35.920918    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:20:35.920923    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:20:35.935130    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:35.935140    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:35.939240    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:35.939250    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:35.973499    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:20:35.973511    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:20:35.985036    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:20:35.985048    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:35.997307    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:20:35.997321    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:20:36.013059    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:20:36.013072    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:20:36.025103    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:20:36.025112    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:20:36.037199    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:20:36.037209    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:20:36.054705    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:20:36.054716    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:20:36.067161    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:36.067172    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:36.090777    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:20:36.090784    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:20:36.134624    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:20:36.134642    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:20:36.146380    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:20:36.146395    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:20:36.168040    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:20:36.168051    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:20:36.178966    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:36.178976    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:36.217984    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:36.218078    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:36.218662    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:20:36.218668    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:20:36.236425    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:36.236434    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:36.236459    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:20:36.236464    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:36.236468    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:36.236472    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:36.236475    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:20:46.240509    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:20:51.242816    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:20:51.242925    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:20:51.254220    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:20:51.254286    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:20:51.265023    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:20:51.265106    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:20:51.275204    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:20:51.275280    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:20:51.286135    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:20:51.286212    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:20:51.296187    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:20:51.296256    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:20:51.306757    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:20:51.306844    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:20:51.316918    4093 logs.go:276] 0 containers: []
	W0819 04:20:51.316932    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:20:51.317004    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:20:51.326864    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:20:51.326881    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:20:51.326887    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:20:51.367266    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:20:51.367276    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:20:51.382356    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:20:51.382370    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:20:51.400378    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:20:51.400390    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:20:51.413072    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:20:51.413086    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:20:51.425645    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:20:51.425659    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:20:51.460110    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:20:51.460121    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:20:51.474323    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:20:51.474334    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:20:51.486517    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:20:51.486529    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:20:51.522644    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:51.522738    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:51.523307    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:20:51.523314    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:20:51.535879    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:20:51.535891    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:20:51.551463    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:20:51.551474    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:20:51.575611    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:20:51.575620    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:20:51.579772    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:20:51.579778    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:20:51.593135    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:20:51.593144    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:20:51.604882    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:20:51.604893    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:20:51.625595    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:20:51.625609    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:20:51.648378    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:51.648391    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:20:51.648421    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:20:51.648427    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:20:51.648431    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:20:51.648436    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:51.648446    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:01.650823    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:06.653028    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:06.653371    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:06.677405    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:21:06.677531    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:06.693607    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:21:06.693699    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:06.706488    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:21:06.706564    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:06.717397    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:21:06.717470    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:06.728056    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:21:06.728129    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:06.738729    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:21:06.738801    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:06.748830    4093 logs.go:276] 0 containers: []
	W0819 04:21:06.748842    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:06.748906    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:06.759227    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:21:06.759243    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:21:06.759249    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:21:06.771128    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:21:06.771139    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:21:06.784011    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:21:06.784020    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:21:06.796016    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:21:06.796028    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:06.808038    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:21:06.808049    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:21:06.848961    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:06.848970    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:06.884344    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:21:06.884358    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:21:06.899760    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:21:06.899771    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:21:06.921485    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:21:06.921495    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:21:06.932676    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:06.932688    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:06.955777    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:06.955783    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:06.959693    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:21:06.959699    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:21:06.973772    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:06.973783    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:07.010388    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:21:07.010479    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:21:07.011044    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:21:07.011048    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:21:07.022209    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:21:07.022221    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:21:07.034151    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:21:07.034164    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:21:07.051420    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:21:07.051434    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:21:07.083376    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:07.083390    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:07.083421    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:21:07.083427    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:21:07.083442    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:21:07.083446    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:07.083449    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:17.087603    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:22.090087    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:22.090213    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:21:22.101957    4093 logs.go:276] 2 containers: [857a1390fd04 b3b1f57bf431]
	I0819 04:21:22.102035    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:21:22.113177    4093 logs.go:276] 2 containers: [be42f13859d1 672093e300cc]
	I0819 04:21:22.113243    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:21:22.123701    4093 logs.go:276] 1 containers: [7bd1561a8a6f]
	I0819 04:21:22.123771    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:21:22.133882    4093 logs.go:276] 2 containers: [d95ed659ab7f 6add09fad9b2]
	I0819 04:21:22.133946    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:21:22.144551    4093 logs.go:276] 1 containers: [bc99c20c6575]
	I0819 04:21:22.144612    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:21:22.155108    4093 logs.go:276] 2 containers: [c08aada44f32 ce491870b40f]
	I0819 04:21:22.155178    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:21:22.165309    4093 logs.go:276] 0 containers: []
	W0819 04:21:22.165320    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:21:22.165379    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:21:22.175936    4093 logs.go:276] 2 containers: [3e4479afe33e 343dec6784e0]
	I0819 04:21:22.175953    4093 logs.go:123] Gathering logs for kube-apiserver [b3b1f57bf431] ...
	I0819 04:21:22.175958    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3b1f57bf431"
	I0819 04:21:22.212730    4093 logs.go:123] Gathering logs for kube-proxy [bc99c20c6575] ...
	I0819 04:21:22.212740    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc99c20c6575"
	I0819 04:21:22.224689    4093 logs.go:123] Gathering logs for kube-controller-manager [ce491870b40f] ...
	I0819 04:21:22.224697    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce491870b40f"
	I0819 04:21:22.241755    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:21:22.241765    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:21:22.280400    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:21:22.280493    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:21:22.281093    4093 logs.go:123] Gathering logs for etcd [be42f13859d1] ...
	I0819 04:21:22.281102    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be42f13859d1"
	I0819 04:21:22.298846    4093 logs.go:123] Gathering logs for etcd [672093e300cc] ...
	I0819 04:21:22.298856    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 672093e300cc"
	I0819 04:21:22.313397    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:21:22.313408    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:21:22.326299    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:21:22.326309    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:21:22.330445    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:21:22.330453    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:21:22.366201    4093 logs.go:123] Gathering logs for kube-scheduler [d95ed659ab7f] ...
	I0819 04:21:22.366213    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95ed659ab7f"
	I0819 04:21:22.378451    4093 logs.go:123] Gathering logs for kube-scheduler [6add09fad9b2] ...
	I0819 04:21:22.378461    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add09fad9b2"
	I0819 04:21:22.401063    4093 logs.go:123] Gathering logs for kube-controller-manager [c08aada44f32] ...
	I0819 04:21:22.401076    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08aada44f32"
	I0819 04:21:22.419661    4093 logs.go:123] Gathering logs for kube-apiserver [857a1390fd04] ...
	I0819 04:21:22.419671    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857a1390fd04"
	I0819 04:21:22.434663    4093 logs.go:123] Gathering logs for coredns [7bd1561a8a6f] ...
	I0819 04:21:22.434673    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd1561a8a6f"
	I0819 04:21:22.445853    4093 logs.go:123] Gathering logs for storage-provisioner [3e4479afe33e] ...
	I0819 04:21:22.445863    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4479afe33e"
	I0819 04:21:22.460496    4093 logs.go:123] Gathering logs for storage-provisioner [343dec6784e0] ...
	I0819 04:21:22.460508    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 343dec6784e0"
	I0819 04:21:22.472494    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:21:22.472505    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:21:22.494901    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:22.494910    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:21:22.494935    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:21:22.494940    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:21:22.494963    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:21:22.494968    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:22.494971    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:32.498969    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:37.501357    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:37.501434    4093 kubeadm.go:597] duration metric: took 4m6.983286959s to restartPrimaryControlPlane
	W0819 04:21:37.501508    4093 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 04:21:37.501546    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 04:21:38.514597    4093 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.013045208s)
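Commands that run long get their wall-clock duration appended, as in the "(1.013045208s)" suffix above. A minimal timing wrapper in that spirit (sketch only; the one-second reporting threshold is an assumption, and the real runner executes over SSH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runTimed runs a command and, when it takes noticeably long, reports
// the elapsed time the way the ssh_runner.go:235 "Completed:" line does.
func runTimed(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		fmt.Printf("Completed: %s %v: (%s)\n", name, args, d)
	}
	return err
}

func main() {
	_ = runTimed("sleep", "2")
}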
	I0819 04:21:38.514687    4093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 04:21:38.519551    4093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:21:38.522372    4093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:21:38.525165    4093 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:21:38.525171    4093 kubeadm.go:157] found existing configuration files:
	
	I0819 04:21:38.525193    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/admin.conf
	I0819 04:21:38.528068    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:21:38.528087    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:21:38.530512    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/kubelet.conf
	I0819 04:21:38.533339    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:21:38.533367    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:21:38.536430    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/controller-manager.conf
	I0819 04:21:38.539202    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:21:38.539223    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:21:38.541694    4093 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/scheduler.conf
	I0819 04:21:38.544650    4093 kubeadm.go:163] "https://control-plane.minikube.internal:50464" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50464 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:21:38.544673    4093 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
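The four grep/rm pairs above implement a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed before kubeadm regenerates it. A compact sketch of that sweep (paths and endpoint are from the log; the helper itself is illustrative, and in the log each step runs via sudo over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50464"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the pattern or the file is missing; the
		// log shows "Process exited with status 2" because the files were absent.
		if err := exec.Command("grep", "-q", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			os.Remove(conf) // matches the "sudo rm -f" step
		}
	}
}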
	I0819 04:21:38.547282    4093 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 04:21:38.562244    4093 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 04:21:38.562271    4093 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 04:21:38.613304    4093 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 04:21:38.613391    4093 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 04:21:38.613474    4093 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 04:21:38.662332    4093 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 04:21:38.666592    4093 out.go:235]   - Generating certificates and keys ...
	I0819 04:21:38.666624    4093 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 04:21:38.666663    4093 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 04:21:38.666710    4093 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 04:21:38.666740    4093 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 04:21:38.666773    4093 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 04:21:38.666802    4093 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 04:21:38.666841    4093 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 04:21:38.666880    4093 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 04:21:38.666927    4093 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 04:21:38.666968    4093 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 04:21:38.666990    4093 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 04:21:38.667020    4093 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 04:21:38.816464    4093 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 04:21:38.972449    4093 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 04:21:39.081369    4093 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 04:21:39.227639    4093 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 04:21:39.258109    4093 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 04:21:39.258623    4093 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 04:21:39.258751    4093 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 04:21:39.326946    4093 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 04:21:39.330141    4093 out.go:235]   - Booting up control plane ...
	I0819 04:21:39.330188    4093 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 04:21:39.330235    4093 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 04:21:39.330274    4093 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 04:21:39.330315    4093 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 04:21:39.330392    4093 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 04:21:43.830921    4093 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501285 seconds
	I0819 04:21:43.830979    4093 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 04:21:43.835015    4093 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 04:21:44.342378    4093 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 04:21:44.342552    4093 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-446000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 04:21:44.846670    4093 kubeadm.go:310] [bootstrap-token] Using token: p7y4ix.t1jkzzhb876hyy9j
	I0819 04:21:44.849804    4093 out.go:235]   - Configuring RBAC rules ...
	I0819 04:21:44.849870    4093 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 04:21:44.849920    4093 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 04:21:44.854397    4093 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 04:21:44.855501    4093 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0819 04:21:44.856390    4093 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 04:21:44.857284    4093 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 04:21:44.860784    4093 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 04:21:45.029070    4093 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 04:21:45.250655    4093 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 04:21:45.251108    4093 kubeadm.go:310] 
	I0819 04:21:45.251138    4093 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 04:21:45.251142    4093 kubeadm.go:310] 
	I0819 04:21:45.251177    4093 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 04:21:45.251183    4093 kubeadm.go:310] 
	I0819 04:21:45.251203    4093 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 04:21:45.251237    4093 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 04:21:45.251271    4093 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 04:21:45.251277    4093 kubeadm.go:310] 
	I0819 04:21:45.251310    4093 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 04:21:45.251315    4093 kubeadm.go:310] 
	I0819 04:21:45.251343    4093 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 04:21:45.251346    4093 kubeadm.go:310] 
	I0819 04:21:45.251376    4093 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 04:21:45.251423    4093 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 04:21:45.251463    4093 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 04:21:45.251466    4093 kubeadm.go:310] 
	I0819 04:21:45.251509    4093 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 04:21:45.251554    4093 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 04:21:45.251558    4093 kubeadm.go:310] 
	I0819 04:21:45.251606    4093 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p7y4ix.t1jkzzhb876hyy9j \
	I0819 04:21:45.251663    4093 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:200cf9aaf4d8090b061170c9280858f68184aa10356c82792dd3b43229196e5e \
	I0819 04:21:45.251676    4093 kubeadm.go:310] 	--control-plane 
	I0819 04:21:45.251681    4093 kubeadm.go:310] 
	I0819 04:21:45.251727    4093 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 04:21:45.251732    4093 kubeadm.go:310] 
	I0819 04:21:45.251780    4093 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p7y4ix.t1jkzzhb876hyy9j \
	I0819 04:21:45.251828    4093 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:200cf9aaf4d8090b061170c9280858f68184aa10356c82792dd3b43229196e5e 
	I0819 04:21:45.251923    4093 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 04:21:45.251939    4093 cni.go:84] Creating CNI manager for ""
	I0819 04:21:45.251947    4093 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:21:45.254799    4093 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 04:21:45.261749    4093 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 04:21:45.264848    4093 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
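The 496-byte conflist copied here is not reproduced in the log; purely as an illustration, a bridge CNI configuration of this shape typically looks like the following (all field values are assumptions, not the actual file contents):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}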
	I0819 04:21:45.269755    4093 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 04:21:45.269807    4093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 04:21:45.269833    4093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-446000 minikube.k8s.io/updated_at=2024_08_19T04_21_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=stopped-upgrade-446000 minikube.k8s.io/primary=true
	I0819 04:21:45.273031    4093 ops.go:34] apiserver oom_adj: -16
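The -16 read back by ops.go is the legacy kernel OOM-score knob, meaning the apiserver is strongly protected from the OOM killer. A quick way to inspect both the legacy value and its modern equivalent (a sketch, assuming a single kube-apiserver process):

	# Legacy knob, range -17..15; lower = less likely to be OOM-killed.
	cat /proc/$(pgrep kube-apiserver)/oom_adj
	# Modern equivalent the kernel maps it to, range -1000..1000.
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj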
	I0819 04:21:45.311019    4093 kubeadm.go:1113] duration metric: took 41.246917ms to wait for elevateKubeSystemPrivileges
	I0819 04:21:45.311035    4093 kubeadm.go:394] duration metric: took 4m14.806381792s to StartCluster
	I0819 04:21:45.311046    4093 settings.go:142] acquiring lock: {Name:mkadddaa5ec690138051e9a9334213fba69e0867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:21:45.311165    4093 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:21:45.311602    4093 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/kubeconfig: {Name:mkcc8b27cbda2ef567c4911aa335c1e1951a7d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:21:45.311831    4093 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:21:45.311869    4093 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 04:21:45.311913    4093 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-446000"
	I0819 04:21:45.311921    4093 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:21:45.311925    4093 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-446000"
	W0819 04:21:45.311928    4093 addons.go:243] addon storage-provisioner should already be in state true
	I0819 04:21:45.311937    4093 host.go:66] Checking if "stopped-upgrade-446000" exists ...
	I0819 04:21:45.311950    4093 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-446000"
	I0819 04:21:45.311963    4093 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-446000"
	I0819 04:21:45.312178    4093 retry.go:31] will retry after 1.410262221s: connect: dial unix /Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/monitor: connect: connection refused
	I0819 04:21:45.312890    4093 kapi.go:59] client config for stopped-upgrade-446000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/profiles/stopped-upgrade-446000/client.key", CAFile:"/Users/jenkins/minikube-integration/19476-967/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102335610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:21:45.313003    4093 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-446000"
	W0819 04:21:45.313008    4093 addons.go:243] addon default-storageclass should already be in state true
	I0819 04:21:45.313016    4093 host.go:66] Checking if "stopped-upgrade-446000" exists ...
	I0819 04:21:45.313528    4093 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 04:21:45.313532    4093 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 04:21:45.313537    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:21:45.315747    4093 out.go:177] * Verifying Kubernetes components...
	I0819 04:21:45.322805    4093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:21:45.391437    4093 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:21:45.396536    4093 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:21:45.396581    4093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:21:45.400435    4093 api_server.go:72] duration metric: took 88.595625ms to wait for apiserver process to appear ...
	I0819 04:21:45.400444    4093 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:21:45.400450    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
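The healthz probe minikube issues here (and keeps retrying below) can be reproduced manually against the same endpoint; a sketch, assuming the guest address is reachable from wherever you run it:

	# -k skips TLS verification; a healthy apiserver answers 200 with body "ok".
	curl -sk https://10.0.2.15:8443/healthz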
	I0819 04:21:45.405756    4093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 04:21:45.729111    4093 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 04:21:45.729127    4093 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 04:21:46.730287    4093 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:21:46.734219    4093 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:21:46.734226    4093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 04:21:46.734234    4093 sshutil.go:53] new ssh client: &{IP:localhost Port:50429 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/stopped-upgrade-446000/id_rsa Username:docker}
	I0819 04:21:46.770238    4093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:21:50.401454    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:50.401498    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:21:55.402446    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:21:55.402521    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:00.402728    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:00.402751    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:05.402983    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:05.403007    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:10.403370    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:10.403406    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:15.403851    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:15.403871    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 04:22:15.731057    4093 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 04:22:15.736748    4093 out.go:177] * Enabled addons: storage-provisioner
	I0819 04:22:15.744745    4093 addons.go:510] duration metric: took 30.433260625s for enable addons: enabled=[storage-provisioner]
	I0819 04:22:20.404706    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:20.404745    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:25.405903    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:25.405951    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:30.407507    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:30.407564    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:35.409129    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:35.409152    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:40.411158    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:22:40.411181    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:22:45.413298    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
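The stretch of log above is one healthz attempt roughly every five seconds, each cut off by a client-side timeout — a plain poll loop. A minimal Go sketch of that pattern (an illustration, not minikube's actual code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz retries GET url until it returns 200 OK or the deadline passes.
	func pollHealthz(url string, deadline time.Time) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-attempt cutoff, matching the timeouts in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed cluster cert
			},
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(5 * time.Second) // wait before the next attempt
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(pollHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(6*time.Minute)))
	}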
	I0819 04:22:45.413411    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:22:45.424518    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:22:45.424593    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:22:45.435030    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:22:45.435102    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:22:45.445852    4093 logs.go:276] 2 containers: [23bece56c888 196b61ee06a4]
	I0819 04:22:45.445913    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:22:45.456462    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:22:45.456529    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:22:45.466879    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:22:45.466941    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:22:45.478148    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:22:45.478227    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:22:45.488780    4093 logs.go:276] 0 containers: []
	W0819 04:22:45.488793    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:22:45.488857    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:22:45.500384    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:22:45.500399    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:22:45.500404    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:22:45.517458    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:22:45.517552    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
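The forbidden errors flagged above come from the node authorizer: a kubelet's node identity may only read objects tied to pods actually bound to that node, and here no such relationship was found. The denial can be confirmed from the host with impersonation; a sketch, assuming admin credentials in the kubeconfig:

	# Ask whether this node identity may list ConfigMaps in kube-system.
	kubectl auth can-i list configmaps -n kube-system \
	  --as=system:node:stopped-upgrade-446000 --as-group=system:nodes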
	I0819 04:22:45.533916    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:22:45.533924    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:22:45.538085    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:22:45.538095    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:22:45.549648    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:22:45.549658    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:22:45.564904    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:22:45.564914    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:22:45.576836    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:22:45.576848    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:22:45.588475    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:22:45.588486    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:22:45.627615    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:22:45.627626    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:22:45.642451    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:22:45.642463    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:22:45.657049    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:22:45.657059    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:22:45.668508    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:22:45.668518    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:22:45.680806    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:22:45.680819    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:22:45.701958    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:22:45.701967    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:22:45.726742    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:45.726752    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:22:45.726781    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:22:45.726786    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:22:45.726789    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:22:45.726794    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:45.726798    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:55.728501    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:23:00.730984    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:23:00.731429    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:23:00.769488    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:23:00.769624    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:23:00.790382    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:23:00.790478    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:23:00.805195    4093 logs.go:276] 2 containers: [23bece56c888 196b61ee06a4]
	I0819 04:23:00.805271    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:23:00.817669    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:23:00.817739    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:23:00.830426    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:23:00.830502    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:23:00.841542    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:23:00.841607    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:23:00.852143    4093 logs.go:276] 0 containers: []
	W0819 04:23:00.852154    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:23:00.852213    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:23:00.862580    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:23:00.862594    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:23:00.862600    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:23:00.867145    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:23:00.867152    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:23:00.878843    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:23:00.878856    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:23:00.894326    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:23:00.894336    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:23:00.905534    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:23:00.905546    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:23:00.917367    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:23:00.917375    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:23:00.941575    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:23:00.941583    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:23:00.953471    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:23:00.953483    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:23:00.969677    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:23:00.969768    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:23:00.986063    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:23:00.986070    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:23:01.022168    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:23:01.022180    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:23:01.036362    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:23:01.036373    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:23:01.050231    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:23:01.050242    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:23:01.061487    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:23:01.061495    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:23:01.078702    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:01.078712    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:23:01.078738    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:23:01.078744    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:23:01.078747    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:23:01.078751    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:01.078755    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:23:11.081710    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:23:16.083050    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:23:16.083499    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:23:16.123771    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:23:16.123898    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:23:16.147196    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:23:16.147315    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:23:16.163427    4093 logs.go:276] 2 containers: [23bece56c888 196b61ee06a4]
	I0819 04:23:16.163498    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:23:16.176101    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:23:16.176170    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:23:16.186742    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:23:16.186815    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:23:16.197449    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:23:16.197517    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:23:16.207660    4093 logs.go:276] 0 containers: []
	W0819 04:23:16.207675    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:23:16.207744    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:23:16.221844    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:23:16.221858    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:23:16.221863    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:23:16.226065    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:23:16.226075    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:23:16.264852    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:23:16.264865    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:23:16.277075    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:23:16.277088    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:23:16.291746    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:23:16.291756    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:23:16.304225    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:23:16.304238    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:23:16.321595    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:23:16.321688    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:23:16.338106    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:23:16.338113    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:23:16.352090    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:23:16.352102    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:23:16.365778    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:23:16.365787    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:23:16.377324    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:23:16.377338    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:23:16.388211    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:23:16.388223    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:23:16.405729    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:23:16.405741    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:23:16.416948    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:23:16.416957    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:23:16.440280    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:16.440287    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:23:16.440309    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:23:16.440313    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:23:16.440317    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:23:16.440321    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:16.440323    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:23:26.443235    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:23:31.445913    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:23:31.446238    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:23:31.497280    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:23:31.497405    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:23:31.515841    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:23:31.515935    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:23:31.531775    4093 logs.go:276] 2 containers: [23bece56c888 196b61ee06a4]
	I0819 04:23:31.531855    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:23:31.543752    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:23:31.543817    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:23:31.554389    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:23:31.554449    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:23:31.565132    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:23:31.565196    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:23:31.575671    4093 logs.go:276] 0 containers: []
	W0819 04:23:31.575683    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:23:31.575744    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:23:31.586231    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:23:31.586247    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:23:31.586252    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:23:31.604197    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:23:31.604207    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:23:31.615872    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:23:31.615885    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:23:31.620101    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:23:31.620111    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:23:31.656158    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:23:31.656168    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:23:31.671535    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:23:31.671547    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:23:31.683121    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:23:31.683133    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:23:31.695119    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:23:31.695132    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:23:31.718214    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:23:31.718222    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:23:31.729583    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:23:31.729596    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:23:31.746474    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:23:31.746567    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:23:31.763370    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:23:31.763375    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:23:31.777419    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:23:31.777432    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:23:31.791506    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:23:31.791517    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:23:31.804970    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:31.804983    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:23:31.805022    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:23:31.805027    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:23:31.805029    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:23:31.805033    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:31.805036    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:23:41.809173    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:23:46.811884    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:23:46.812224    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:23:46.854359    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:23:46.854493    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:23:46.875568    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:23:46.875691    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:23:46.891704    4093 logs.go:276] 2 containers: [23bece56c888 196b61ee06a4]
	I0819 04:23:46.891786    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:23:46.904349    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:23:46.904411    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:23:46.914865    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:23:46.914924    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:23:46.925549    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:23:46.925619    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:23:46.936218    4093 logs.go:276] 0 containers: []
	W0819 04:23:46.936229    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:23:46.936290    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:23:46.946722    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:23:46.946738    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:23:46.946744    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:23:46.958842    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:23:46.958854    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:23:46.973509    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:23:46.973519    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:23:46.985201    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:23:46.985215    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:23:47.005913    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:23:47.006007    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:23:47.022418    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:23:47.022424    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:23:47.026868    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:23:47.026877    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:23:47.040779    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:23:47.040790    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:23:47.052114    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:23:47.052126    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:23:47.075017    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:23:47.075024    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:23:47.096682    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:23:47.096694    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:23:47.136895    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:23:47.136905    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:23:47.158546    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:23:47.158556    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:23:47.183492    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:23:47.183503    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:23:47.195352    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:47.195364    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:23:47.195393    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:23:47.195397    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:23:47.195401    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:23:47.195405    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:47.195408    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:23:57.198289    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:24:02.201206    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:24:02.201673    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:24:02.241940    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:24:02.242071    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:24:02.270427    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:24:02.270506    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:24:02.284862    4093 logs.go:276] 4 containers: [ef558c9a6de6 20ce9e060b4f 23bece56c888 196b61ee06a4]
	I0819 04:24:02.284933    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:24:02.297622    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:24:02.297689    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:24:02.308070    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:24:02.308136    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:24:02.319203    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:24:02.319270    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:24:02.329767    4093 logs.go:276] 0 containers: []
	W0819 04:24:02.329779    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:24:02.329829    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:24:02.340237    4093 logs.go:276] 1 containers: [f3ca31526ce2]
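Each discovery pass above runs the same probe once per component: `docker ps -a` filtered on the kubelet's `k8s_<component>` container-name prefix, printing only the ID. The equivalent as a single loop (component names copied from the log; the loop itself is just a compact restatement):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      # -a includes exited containers, so crashed components are still found
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done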
	I0819 04:24:02.340257    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:24:02.340263    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:24:02.344834    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:24:02.344841    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:24:02.358788    4093 logs.go:123] Gathering logs for coredns [20ce9e060b4f] ...
	I0819 04:24:02.358801    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ce9e060b4f"
	I0819 04:24:02.370368    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:24:02.370378    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:24:02.381809    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:24:02.381820    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:24:02.393492    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:24:02.393503    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:24:02.417227    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:24:02.417234    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:24:02.428850    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:24:02.428862    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:24:02.447056    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:24:02.447146    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:24:02.463466    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:24:02.463471    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:24:02.477952    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:24:02.477961    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:24:02.492273    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:24:02.492286    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:24:02.510376    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:24:02.510389    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:24:02.521719    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:24:02.521732    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:24:02.557530    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:24:02.557545    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:24:02.571616    4093 logs.go:123] Gathering logs for coredns [ef558c9a6de6] ...
	I0819 04:24:02.571629    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef558c9a6de6"
	I0819 04:24:02.582858    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:02.582868    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:24:02.582895    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:24:02.582899    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:24:02.582904    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:24:02.582907    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:02.582910    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:12.586980    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:24:17.589782    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:24:17.590246    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:24:17.630847    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:24:17.630986    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:24:17.651695    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:24:17.651815    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:24:17.666384    4093 logs.go:276] 4 containers: [ef558c9a6de6 20ce9e060b4f 23bece56c888 196b61ee06a4]
	I0819 04:24:17.666453    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:24:17.679113    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:24:17.679173    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:24:17.690011    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:24:17.690066    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:24:17.702259    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:24:17.702338    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:24:17.713816    4093 logs.go:276] 0 containers: []
	W0819 04:24:17.713831    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:24:17.713899    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:24:17.726375    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:24:17.726394    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:24:17.726399    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:24:17.744390    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:24:17.744491    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:24:17.761960    4093 logs.go:123] Gathering logs for coredns [20ce9e060b4f] ...
	I0819 04:24:17.761983    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ce9e060b4f"
	I0819 04:24:17.775090    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:24:17.775101    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:24:17.788490    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:24:17.788503    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:24:17.801707    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:24:17.801720    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:24:17.843608    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:24:17.843617    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:24:17.868056    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:24:17.868067    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:24:17.880461    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:24:17.880471    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:24:17.884842    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:24:17.884849    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:24:17.900587    4093 logs.go:123] Gathering logs for coredns [ef558c9a6de6] ...
	I0819 04:24:17.900599    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef558c9a6de6"
	I0819 04:24:17.911554    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:24:17.911565    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:24:17.926394    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:24:17.926403    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:24:17.940473    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:24:17.940481    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:24:17.952186    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:24:17.952195    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:24:17.963662    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:24:17.963674    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:24:17.981500    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:17.981515    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:24:17.981544    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:24:17.981550    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:24:17.981557    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:24:17.981560    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:17.981562    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:27.985642    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:24:32.986379    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:24:32.986461    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:24:32.998660    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:24:32.998709    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:24:33.009230    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:24:33.009294    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:24:33.020715    4093 logs.go:276] 4 containers: [ef558c9a6de6 20ce9e060b4f 23bece56c888 196b61ee06a4]
	I0819 04:24:33.020778    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:24:33.036577    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:24:33.036635    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:24:33.047615    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:24:33.047670    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:24:33.058750    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:24:33.058828    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:24:33.069716    4093 logs.go:276] 0 containers: []
	W0819 04:24:33.069727    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:24:33.069822    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:24:33.080768    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:24:33.080788    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:24:33.080794    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:24:33.120977    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:24:33.120988    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:24:33.133526    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:24:33.133539    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:24:33.150840    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:24:33.150851    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:24:33.168086    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:24:33.168099    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:24:33.173090    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:24:33.173101    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:24:33.186722    4093 logs.go:123] Gathering logs for coredns [ef558c9a6de6] ...
	I0819 04:24:33.186734    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef558c9a6de6"
	I0819 04:24:33.200047    4093 logs.go:123] Gathering logs for coredns [20ce9e060b4f] ...
	I0819 04:24:33.200057    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ce9e060b4f"
	I0819 04:24:33.213770    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:24:33.213782    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:24:33.232177    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:24:33.232272    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:24:33.249564    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:24:33.249580    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:24:33.265320    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:24:33.265329    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:24:33.276830    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:24:33.276843    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:24:33.295653    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:24:33.295665    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:24:33.308799    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:24:33.308807    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:24:33.320869    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:24:33.320880    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:24:33.346732    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:33.346743    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:24:33.346770    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:24:33.346775    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:24:33.346779    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:24:33.346783    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:33.346786    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:43.350303    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:24:48.353085    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:24:48.353418    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:24:48.385131    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:24:48.385279    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:24:48.404192    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:24:48.404279    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:24:48.418076    4093 logs.go:276] 4 containers: [ef558c9a6de6 20ce9e060b4f 23bece56c888 196b61ee06a4]
	I0819 04:24:48.418152    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:24:48.430087    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:24:48.430155    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:24:48.440758    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:24:48.440819    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:24:48.450923    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:24:48.450994    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:24:48.460936    4093 logs.go:276] 0 containers: []
	W0819 04:24:48.460948    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:24:48.460994    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:24:48.470929    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:24:48.470945    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:24:48.470950    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:24:48.488485    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:24:48.488577    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:24:48.505042    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:24:48.505049    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:24:48.540509    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:24:48.540522    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:24:48.554756    4093 logs.go:123] Gathering logs for coredns [ef558c9a6de6] ...
	I0819 04:24:48.554768    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef558c9a6de6"
	I0819 04:24:48.566693    4093 logs.go:123] Gathering logs for coredns [20ce9e060b4f] ...
	I0819 04:24:48.566704    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ce9e060b4f"
	I0819 04:24:48.578339    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:24:48.578352    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:24:48.589906    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:24:48.589915    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:24:48.594518    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:24:48.594525    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:24:48.612413    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:24:48.612425    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:24:48.624304    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:24:48.624313    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:24:48.648099    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:24:48.648106    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:24:48.659792    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:24:48.659801    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:24:48.674916    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:24:48.674926    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:24:48.689926    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:24:48.689936    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:24:48.701892    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:24:48.701905    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:24:48.730477    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:48.730489    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:24:48.730515    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:24:48.730520    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:24:48.730524    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:24:48.730528    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:48.730531    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:58.734511    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:25:03.735058    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:25:03.735510    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:25:03.773014    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:25:03.773148    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:25:03.794740    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:25:03.794839    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:25:03.809222    4093 logs.go:276] 4 containers: [ef558c9a6de6 20ce9e060b4f 23bece56c888 196b61ee06a4]
	I0819 04:25:03.809298    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:25:03.821791    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:25:03.821865    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:25:03.834242    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:25:03.834314    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:25:03.848215    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:25:03.848281    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:25:03.859506    4093 logs.go:276] 0 containers: []
	W0819 04:25:03.859517    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:25:03.859569    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:25:03.872122    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:25:03.872142    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:25:03.872147    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:25:03.883860    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:25:03.883871    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:25:03.895431    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:25:03.895443    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:25:03.941853    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:25:03.941864    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:25:03.961171    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:25:03.961181    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:25:03.977574    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:25:03.977586    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:25:03.990834    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:25:03.990847    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:25:04.007361    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:25:04.007452    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:25:04.023795    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:25:04.023800    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:25:04.027812    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:25:04.027820    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:25:04.050534    4093 logs.go:123] Gathering logs for coredns [ef558c9a6de6] ...
	I0819 04:25:04.050543    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef558c9a6de6"
	I0819 04:25:04.062281    4093 logs.go:123] Gathering logs for coredns [20ce9e060b4f] ...
	I0819 04:25:04.062292    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ce9e060b4f"
	I0819 04:25:04.073923    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:25:04.073933    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:25:04.094725    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:25:04.094735    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:25:04.106466    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:25:04.106476    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:25:04.121300    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:25:04.121310    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:25:04.132991    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:04.133005    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:25:04.133032    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:25:04.133036    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:25:04.133040    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:25:04.133043    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:04.133046    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:14.135073    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:25:19.137402    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:25:19.137662    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:25:19.156028    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:25:19.156108    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:25:19.169981    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:25:19.170042    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:25:19.181540    4093 logs.go:276] 4 containers: [ef558c9a6de6 20ce9e060b4f 23bece56c888 196b61ee06a4]
	I0819 04:25:19.181618    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:25:19.192002    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:25:19.192073    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:25:19.202392    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:25:19.202452    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:25:19.213700    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:25:19.213768    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:25:19.228325    4093 logs.go:276] 0 containers: []
	W0819 04:25:19.228336    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:25:19.228392    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:25:19.238857    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:25:19.238876    4093 logs.go:123] Gathering logs for coredns [ef558c9a6de6] ...
	I0819 04:25:19.238882    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef558c9a6de6"
	I0819 04:25:19.251109    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:25:19.251121    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:25:19.263196    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:25:19.263209    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:25:19.280398    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:25:19.280407    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:25:19.292028    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:25:19.292040    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:25:19.308385    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:25:19.308477    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:25:19.324715    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:25:19.324722    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:25:19.360044    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:25:19.360054    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:25:19.371726    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:25:19.371736    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:25:19.383281    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:25:19.383294    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:25:19.397538    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:25:19.397552    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:25:19.420604    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:25:19.420617    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:25:19.434755    4093 logs.go:123] Gathering logs for coredns [20ce9e060b4f] ...
	I0819 04:25:19.434766    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ce9e060b4f"
	I0819 04:25:19.446715    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:25:19.446727    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:25:19.471093    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:25:19.471101    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:25:19.475218    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:25:19.475225    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:25:19.489069    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:19.489080    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:25:19.489107    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:25:19.489112    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:25:19.489123    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:25:19.489164    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:19.489167    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:29.493296    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:25:34.495800    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:25:34.495858    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:25:34.506768    4093 logs.go:276] 1 containers: [47f9e56baf4e]
	I0819 04:25:34.506833    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:25:34.523081    4093 logs.go:276] 1 containers: [f2b22411f75b]
	I0819 04:25:34.523150    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:25:34.535066    4093 logs.go:276] 4 containers: [ef558c9a6de6 20ce9e060b4f 23bece56c888 196b61ee06a4]
	I0819 04:25:34.535112    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:25:34.545761    4093 logs.go:276] 1 containers: [5d8eef1a2bec]
	I0819 04:25:34.545819    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:25:34.557646    4093 logs.go:276] 1 containers: [8b8837f8e096]
	I0819 04:25:34.557706    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:25:34.571652    4093 logs.go:276] 1 containers: [ee8bf9db190f]
	I0819 04:25:34.571705    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:25:34.582636    4093 logs.go:276] 0 containers: []
	W0819 04:25:34.582644    4093 logs.go:278] No container was found matching "kindnet"
	I0819 04:25:34.582688    4093 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:25:34.593881    4093 logs.go:276] 1 containers: [f3ca31526ce2]
	I0819 04:25:34.593897    4093 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:25:34.593903    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:25:34.630512    4093 logs.go:123] Gathering logs for coredns [20ce9e060b4f] ...
	I0819 04:25:34.630520    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ce9e060b4f"
	I0819 04:25:34.642791    4093 logs.go:123] Gathering logs for kube-proxy [8b8837f8e096] ...
	I0819 04:25:34.642801    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8837f8e096"
	I0819 04:25:34.656282    4093 logs.go:123] Gathering logs for Docker ...
	I0819 04:25:34.656295    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:25:34.680905    4093 logs.go:123] Gathering logs for container status ...
	I0819 04:25:34.680918    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:25:34.693614    4093 logs.go:123] Gathering logs for dmesg ...
	I0819 04:25:34.693625    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:25:34.698846    4093 logs.go:123] Gathering logs for kube-apiserver [47f9e56baf4e] ...
	I0819 04:25:34.698860    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f9e56baf4e"
	I0819 04:25:34.715097    4093 logs.go:123] Gathering logs for coredns [23bece56c888] ...
	I0819 04:25:34.715109    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bece56c888"
	I0819 04:25:34.728446    4093 logs.go:123] Gathering logs for coredns [196b61ee06a4] ...
	I0819 04:25:34.728456    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196b61ee06a4"
	I0819 04:25:34.742269    4093 logs.go:123] Gathering logs for kube-scheduler [5d8eef1a2bec] ...
	I0819 04:25:34.742279    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d8eef1a2bec"
	I0819 04:25:34.758648    4093 logs.go:123] Gathering logs for kubelet ...
	I0819 04:25:34.758658    4093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 04:25:34.776378    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:25:34.776471    4093 logs.go:138] Found kubelet problem: Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:25:34.793305    4093 logs.go:123] Gathering logs for storage-provisioner [f3ca31526ce2] ...
	I0819 04:25:34.793314    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca31526ce2"
	I0819 04:25:34.805798    4093 logs.go:123] Gathering logs for coredns [ef558c9a6de6] ...
	I0819 04:25:34.805808    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef558c9a6de6"
	I0819 04:25:34.819096    4093 logs.go:123] Gathering logs for kube-controller-manager [ee8bf9db190f] ...
	I0819 04:25:34.819110    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8bf9db190f"
	I0819 04:25:34.840658    4093 logs.go:123] Gathering logs for etcd [f2b22411f75b] ...
	I0819 04:25:34.840670    4093 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2b22411f75b"
	I0819 04:25:34.855116    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:34.855124    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 04:25:34.855145    4093 out.go:270] X Problems detected in kubelet:
	W0819 04:25:34.855149    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: W0819 11:17:50.677069    1639 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	W0819 04:25:34.855188    4093 out.go:270]   Aug 19 11:17:50 stopped-upgrade-446000 kubelet[1639]: E0819 11:17:50.677083    1639 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-446000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-446000' and this object
	I0819 04:25:34.855194    4093 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:34.855197    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:44.857316    4093 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:25:49.859568    4093 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:25:49.865638    4093 out.go:201] 
	W0819 04:25:49.869595    4093 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 04:25:49.869625    4093 out.go:270] * 
	W0819 04:25:49.871330    4093 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:25:49.883598    4093 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-446000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (588.51s)
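
Note: unlike the socket_vmnet failures later in this report, this run did boot a VM; it then spent the full 6m0s node wait polling the apiserver healthz endpoint at https://10.0.2.15:8443/healthz without it ever reporting healthy. A manual probe of the same endpoint could look like the sketch below (illustrative only; -k is needed because the apiserver presents a certificate signed by the self-signed minikubeCA):

	# Probe the endpoint the test polls; a healthy apiserver answers "ok".
	curl -k --max-time 5 https://10.0.2.15:8443/healthz

	# Or ask minikube for the profile's component state:
	out/minikube-darwin-arm64 status -p stopped-upgrade-446000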

TestPause/serial/Start (9.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-260000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-260000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.912471667s)

-- stdout --
	* [pause-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-260000" primary control-plane node in "pause-260000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-260000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-260000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-260000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-260000 -n pause-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-260000 -n pause-260000: exit status 7 (50.270541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.96s)
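
Note: this is the failure mode shared by every remaining start in this section: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, both VM-creation attempts fail with "Connection refused", and minikube exits with GUEST_PROVISION. A host-side triage sketch, assuming the install paths shown in these logs:

	# Does the socket exist, and is a socket_vmnet daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Exercise the client wrapper the driver invokes; when the daemon is
	# up, it connects to the socket and execs the trailing command.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok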

TestNoKubernetes/serial/StartWithK8s (9.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-182000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-182000 --driver=qemu2 : exit status 80 (9.861660875s)

-- stdout --
	* [NoKubernetes-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-182000" primary control-plane node in "NoKubernetes-182000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-182000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-182000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-182000 -n NoKubernetes-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-182000 -n NoKubernetes-182000: exit status 7 (66.278459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.93s)

TestNoKubernetes/serial/StartWithStopK8s (5.33s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-182000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-182000 --no-kubernetes --driver=qemu2 : exit status 80 (5.270782667s)

-- stdout --
	* [NoKubernetes-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-182000
	* Restarting existing qemu2 VM for "NoKubernetes-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-182000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-182000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-182000 -n NoKubernetes-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-182000 -n NoKubernetes-182000: exit status 7 (61.224791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.33s)
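
Note: the error prefix changes here from "creating host: create: creating:" to "driver start:": the NoKubernetes-182000 profile left behind by the previous subtest still exists, so minikube takes the restart path ("Restarting existing qemu2 VM") and fails on the same socket connect. The cleanup the output itself suggests would be (sketch):

	out/minikube-darwin-arm64 delete -p NoKubernetes-182000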

TestNoKubernetes/serial/Start (5.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-182000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-182000 --no-kubernetes --driver=qemu2 : exit status 80 (5.255429459s)

-- stdout --
	* [NoKubernetes-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-182000
	* Restarting existing qemu2 VM for "NoKubernetes-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-182000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-182000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-182000 -n NoKubernetes-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-182000 -n NoKubernetes-182000: exit status 7 (69.160042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.33s)

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-182000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-182000 --driver=qemu2 : exit status 80 (5.28094775s)

-- stdout --
	* [NoKubernetes-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-182000
	* Restarting existing qemu2 VM for "NoKubernetes-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-182000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-182000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-182000 -n NoKubernetes-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-182000 -n NoKubernetes-182000: exit status 7 (55.08925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)

TestNetworkPlugins/group/auto/Start (9.93s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.93031225s)

-- stdout --
	* [auto-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-745000" primary control-plane node in "auto-745000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:23:58.108344    4317 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:23:58.108489    4317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:23:58.108493    4317 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:58.108495    4317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:23:58.108631    4317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:23:58.109646    4317 out.go:352] Setting JSON to false
	I0819 04:23:58.125722    4317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3201,"bootTime":1724063437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:23:58.125790    4317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:23:58.132562    4317 out.go:177] * [auto-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:23:58.139587    4317 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:23:58.139673    4317 notify.go:220] Checking for updates...
	I0819 04:23:58.145518    4317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:23:58.148460    4317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:23:58.151562    4317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:23:58.154573    4317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:23:58.157561    4317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:23:58.160827    4317 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:23:58.160894    4317 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:23:58.160939    4317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:23:58.164534    4317 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:23:58.171481    4317 start.go:297] selected driver: qemu2
	I0819 04:23:58.171493    4317 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:23:58.171500    4317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:23:58.174110    4317 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:23:58.177532    4317 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:23:58.180651    4317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:23:58.180702    4317 cni.go:84] Creating CNI manager for ""
	I0819 04:23:58.180712    4317 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:23:58.180717    4317 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:23:58.180760    4317 start.go:340] cluster config:
	{Name:auto-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:23:58.185350    4317 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:23:58.192524    4317 out.go:177] * Starting "auto-745000" primary control-plane node in "auto-745000" cluster
	I0819 04:23:58.196420    4317 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:23:58.196449    4317 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:23:58.196456    4317 cache.go:56] Caching tarball of preloaded images
	I0819 04:23:58.196538    4317 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:23:58.196545    4317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:23:58.196605    4317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/auto-745000/config.json ...
	I0819 04:23:58.196616    4317 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/auto-745000/config.json: {Name:mk38496e92a7e7b3b7520029423a387fda8953f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:23:58.196904    4317 start.go:360] acquireMachinesLock for auto-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:23:58.196935    4317 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "auto-745000"
	I0819 04:23:58.196947    4317 start.go:93] Provisioning new machine with config: &{Name:auto-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:23:58.196977    4317 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:23:58.201566    4317 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:23:58.216804    4317 start.go:159] libmachine.API.Create for "auto-745000" (driver="qemu2")
	I0819 04:23:58.216829    4317 client.go:168] LocalClient.Create starting
	I0819 04:23:58.216888    4317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:23:58.216918    4317 main.go:141] libmachine: Decoding PEM data...
	I0819 04:23:58.216927    4317 main.go:141] libmachine: Parsing certificate...
	I0819 04:23:58.216962    4317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:23:58.216992    4317 main.go:141] libmachine: Decoding PEM data...
	I0819 04:23:58.217001    4317 main.go:141] libmachine: Parsing certificate...
	I0819 04:23:58.217469    4317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:23:58.370191    4317 main.go:141] libmachine: Creating SSH key...
	I0819 04:23:58.536853    4317 main.go:141] libmachine: Creating Disk image...
	I0819 04:23:58.536863    4317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:23:58.537275    4317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2
	I0819 04:23:58.546772    4317 main.go:141] libmachine: STDOUT: 
	I0819 04:23:58.546794    4317 main.go:141] libmachine: STDERR: 
	I0819 04:23:58.546844    4317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2 +20000M
	I0819 04:23:58.554765    4317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:23:58.554791    4317 main.go:141] libmachine: STDERR: 
	I0819 04:23:58.554814    4317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2
	I0819 04:23:58.554820    4317 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:23:58.554831    4317 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:23:58.554855    4317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:57:31:a8:56:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2
	I0819 04:23:58.556514    4317 main.go:141] libmachine: STDOUT: 
	I0819 04:23:58.556529    4317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:23:58.556546    4317 client.go:171] duration metric: took 339.715084ms to LocalClient.Create
	I0819 04:24:00.558624    4317 start.go:128] duration metric: took 2.36166325s to createHost
	I0819 04:24:00.558667    4317 start.go:83] releasing machines lock for "auto-745000", held for 2.361755625s
	W0819 04:24:00.558692    4317 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:00.568857    4317 out.go:177] * Deleting "auto-745000" in qemu2 ...
	W0819 04:24:00.594359    4317 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:00.594374    4317 start.go:729] Will try again in 5 seconds ...
	I0819 04:24:05.596498    4317 start.go:360] acquireMachinesLock for auto-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:05.597031    4317 start.go:364] duration metric: took 445.083µs to acquireMachinesLock for "auto-745000"
	I0819 04:24:05.597214    4317 start.go:93] Provisioning new machine with config: &{Name:auto-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:05.597542    4317 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:05.607241    4317 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:05.659590    4317 start.go:159] libmachine.API.Create for "auto-745000" (driver="qemu2")
	I0819 04:24:05.659644    4317 client.go:168] LocalClient.Create starting
	I0819 04:24:05.659763    4317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:05.659835    4317 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:05.659855    4317 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:05.659918    4317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:05.659963    4317 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:05.659976    4317 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:05.660492    4317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:05.819919    4317 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:05.950469    4317 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:05.950477    4317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:05.950695    4317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2
	I0819 04:24:05.960175    4317 main.go:141] libmachine: STDOUT: 
	I0819 04:24:05.960195    4317 main.go:141] libmachine: STDERR: 
	I0819 04:24:05.960259    4317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2 +20000M
	I0819 04:24:05.968278    4317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:05.968293    4317 main.go:141] libmachine: STDERR: 
	I0819 04:24:05.968314    4317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2
	I0819 04:24:05.968318    4317 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:05.968325    4317 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:05.968351    4317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f9:76:75:70:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/auto-745000/disk.qcow2
	I0819 04:24:05.969965    4317 main.go:141] libmachine: STDOUT: 
	I0819 04:24:05.969981    4317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:05.969993    4317 client.go:171] duration metric: took 310.346833ms to LocalClient.Create
	I0819 04:24:07.972174    4317 start.go:128] duration metric: took 2.374619417s to createHost
	I0819 04:24:07.972327    4317 start.go:83] releasing machines lock for "auto-745000", held for 2.375299166s
	W0819 04:24:07.972807    4317 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:07.981223    4317 out.go:201] 
	W0819 04:24:07.988458    4317 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:24:07.988486    4317 out.go:270] * 
	W0819 04:24:07.990971    4317 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:24:08.000386    4317 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.93s)
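
Note: the --alsologtostderr trace above shows the launch mechanics: qemu-system-aarch64 is exec'd through socket_vmnet_client, which connects to /var/run/socket_vmnet and hands QEMU the vmnet socket as file descriptor 3 (hence -netdev socket,id=net0,fd=3 in the command line). A stripped-down sketch of that wrapper pattern (disk path hypothetical, most flags elided):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt -cpu host -accel hvf -m 3072 \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
	  -daemonize disk.qcow2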

TestNetworkPlugins/group/flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.800763958s)

-- stdout --
	* [flannel-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-745000" primary control-plane node in "flannel-745000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:24:10.156552    4426 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:24:10.156670    4426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:10.156673    4426 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:10.156675    4426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:10.156814    4426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:24:10.157860    4426 out.go:352] Setting JSON to false
	I0819 04:24:10.174062    4426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3213,"bootTime":1724063437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:24:10.174165    4426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:24:10.180124    4426 out.go:177] * [flannel-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:24:10.188011    4426 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:24:10.188047    4426 notify.go:220] Checking for updates...
	I0819 04:24:10.193974    4426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:24:10.196995    4426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:24:10.200015    4426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:24:10.203036    4426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:24:10.205995    4426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:24:10.209470    4426 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:24:10.209539    4426 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:24:10.209586    4426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:24:10.212905    4426 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:24:10.222017    4426 start.go:297] selected driver: qemu2
	I0819 04:24:10.222025    4426 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:24:10.222030    4426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:24:10.224324    4426 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:24:10.226889    4426 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:24:10.230081    4426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:24:10.230114    4426 cni.go:84] Creating CNI manager for "flannel"
	I0819 04:24:10.230117    4426 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0819 04:24:10.230154    4426 start.go:340] cluster config:
	{Name:flannel-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:24:10.233568    4426 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:24:10.241945    4426 out.go:177] * Starting "flannel-745000" primary control-plane node in "flannel-745000" cluster
	I0819 04:24:10.245970    4426 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:24:10.245983    4426 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:24:10.245989    4426 cache.go:56] Caching tarball of preloaded images
	I0819 04:24:10.246036    4426 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:24:10.246040    4426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:24:10.246096    4426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/flannel-745000/config.json ...
	I0819 04:24:10.246106    4426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/flannel-745000/config.json: {Name:mkc3463a3b2cf3b3bddc9b1a945bf83fa081eb32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:24:10.246493    4426 start.go:360] acquireMachinesLock for flannel-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:10.246524    4426 start.go:364] duration metric: took 23.917µs to acquireMachinesLock for "flannel-745000"
	I0819 04:24:10.246535    4426 start.go:93] Provisioning new machine with config: &{Name:flannel-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:10.246567    4426 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:10.249987    4426 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:10.265210    4426 start.go:159] libmachine.API.Create for "flannel-745000" (driver="qemu2")
	I0819 04:24:10.265234    4426 client.go:168] LocalClient.Create starting
	I0819 04:24:10.265290    4426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:10.265319    4426 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:10.265328    4426 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:10.265368    4426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:10.265390    4426 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:10.265395    4426 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:10.265727    4426 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:10.415771    4426 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:10.598374    4426 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:10.598387    4426 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:10.598588    4426 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2
	I0819 04:24:10.608252    4426 main.go:141] libmachine: STDOUT: 
	I0819 04:24:10.608278    4426 main.go:141] libmachine: STDERR: 
	I0819 04:24:10.608328    4426 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2 +20000M
	I0819 04:24:10.616552    4426 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:10.616575    4426 main.go:141] libmachine: STDERR: 
	I0819 04:24:10.616588    4426 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2
	I0819 04:24:10.616594    4426 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:10.616609    4426 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:10.616638    4426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:fb:e5:86:63:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2
	I0819 04:24:10.618393    4426 main.go:141] libmachine: STDOUT: 
	I0819 04:24:10.618409    4426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:10.618429    4426 client.go:171] duration metric: took 353.195167ms to LocalClient.Create
	I0819 04:24:12.620587    4426 start.go:128] duration metric: took 2.374027625s to createHost
	I0819 04:24:12.620763    4426 start.go:83] releasing machines lock for "flannel-745000", held for 2.374145542s
	W0819 04:24:12.620833    4426 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:12.627757    4426 out.go:177] * Deleting "flannel-745000" in qemu2 ...
	W0819 04:24:12.658657    4426 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:12.658691    4426 start.go:729] Will try again in 5 seconds ...
	I0819 04:24:17.660716    4426 start.go:360] acquireMachinesLock for flannel-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:17.660856    4426 start.go:364] duration metric: took 120.584µs to acquireMachinesLock for "flannel-745000"
	I0819 04:24:17.660881    4426 start.go:93] Provisioning new machine with config: &{Name:flannel-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:17.660925    4426 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:17.672114    4426 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:17.689668    4426 start.go:159] libmachine.API.Create for "flannel-745000" (driver="qemu2")
	I0819 04:24:17.689701    4426 client.go:168] LocalClient.Create starting
	I0819 04:24:17.689773    4426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:17.689813    4426 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:17.689823    4426 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:17.689862    4426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:17.689886    4426 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:17.689891    4426 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:17.690229    4426 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:17.840819    4426 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:17.868479    4426 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:17.868489    4426 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:17.868726    4426 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2
	I0819 04:24:17.879027    4426 main.go:141] libmachine: STDOUT: 
	I0819 04:24:17.879052    4426 main.go:141] libmachine: STDERR: 
	I0819 04:24:17.879122    4426 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2 +20000M
	I0819 04:24:17.888521    4426 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:17.888547    4426 main.go:141] libmachine: STDERR: 
	I0819 04:24:17.888560    4426 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2
	I0819 04:24:17.888564    4426 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:17.888574    4426 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:17.888610    4426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:21:51:bc:13:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/flannel-745000/disk.qcow2
	I0819 04:24:17.890617    4426 main.go:141] libmachine: STDOUT: 
	I0819 04:24:17.890637    4426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:17.890651    4426 client.go:171] duration metric: took 200.94775ms to LocalClient.Create
	I0819 04:24:19.892821    4426 start.go:128] duration metric: took 2.231899083s to createHost
	I0819 04:24:19.892882    4426 start.go:83] releasing machines lock for "flannel-745000", held for 2.232044583s
	W0819 04:24:19.893192    4426 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:19.901694    4426 out.go:201] 
	W0819 04:24:19.904786    4426 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:24:19.904799    4426 out.go:270] * 
	* 
	W0819 04:24:19.906609    4426 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:24:19.915676    4426 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.80s)
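
Note: every Start failure in this group shares one root cause, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu-system-aarch64 command is never actually launched and minikube gives up after a single retry. A minimal diagnostic sketch for the CI host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (the socket path is taken from the log above):

	ls -l /var/run/socket_vmnet                  # does the Unix socket exist at the path minikube uses?
	sudo launchctl list | grep -i socket_vmnet   # is a launchd service for the daemon loaded at all?
	sudo brew services start socket_vmnet        # one way to (re)start the daemon when brew-installed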
TestNetworkPlugins/group/enable-default-cni/Start (9.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.861331s)

-- stdout --
	* [enable-default-cni-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-745000" primary control-plane node in "enable-default-cni-745000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:24:22.241678    4545 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:24:22.241826    4545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:22.241829    4545 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:22.241831    4545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:22.241958    4545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:24:22.243021    4545 out.go:352] Setting JSON to false
	I0819 04:24:22.260039    4545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3225,"bootTime":1724063437,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:24:22.260117    4545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:24:22.266324    4545 out.go:177] * [enable-default-cni-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:24:22.274179    4545 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:24:22.274233    4545 notify.go:220] Checking for updates...
	I0819 04:24:22.281236    4545 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:24:22.284188    4545 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:24:22.287202    4545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:24:22.290110    4545 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:24:22.293226    4545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:24:22.296550    4545 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:24:22.296616    4545 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:24:22.296667    4545 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:24:22.300196    4545 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:24:22.307180    4545 start.go:297] selected driver: qemu2
	I0819 04:24:22.307186    4545 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:24:22.307193    4545 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:24:22.309274    4545 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:24:22.310959    4545 out.go:177] * Automatically selected the socket_vmnet network
	E0819 04:24:22.314265    4545 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0819 04:24:22.314278    4545 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:24:22.314295    4545 cni.go:84] Creating CNI manager for "bridge"
	I0819 04:24:22.314299    4545 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:24:22.314327    4545 start.go:340] cluster config:
	{Name:enable-default-cni-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:24:22.317954    4545 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:24:22.325183    4545 out.go:177] * Starting "enable-default-cni-745000" primary control-plane node in "enable-default-cni-745000" cluster
	I0819 04:24:22.329209    4545 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:24:22.329224    4545 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:24:22.329234    4545 cache.go:56] Caching tarball of preloaded images
	I0819 04:24:22.329287    4545 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:24:22.329293    4545 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:24:22.329353    4545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/enable-default-cni-745000/config.json ...
	I0819 04:24:22.329364    4545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/enable-default-cni-745000/config.json: {Name:mk89f65479273165c409aba81a08429cbf262c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:24:22.329579    4545 start.go:360] acquireMachinesLock for enable-default-cni-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:22.329613    4545 start.go:364] duration metric: took 28.416µs to acquireMachinesLock for "enable-default-cni-745000"
	I0819 04:24:22.329627    4545 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:22.329660    4545 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:22.338231    4545 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:22.354519    4545 start.go:159] libmachine.API.Create for "enable-default-cni-745000" (driver="qemu2")
	I0819 04:24:22.354545    4545 client.go:168] LocalClient.Create starting
	I0819 04:24:22.354617    4545 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:22.354651    4545 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:22.354660    4545 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:22.354698    4545 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:22.354721    4545 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:22.354729    4545 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:22.355095    4545 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:22.505420    4545 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:22.604653    4545 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:22.604660    4545 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:22.604849    4545 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2
	I0819 04:24:22.614108    4545 main.go:141] libmachine: STDOUT: 
	I0819 04:24:22.614127    4545 main.go:141] libmachine: STDERR: 
	I0819 04:24:22.614173    4545 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2 +20000M
	I0819 04:24:22.622208    4545 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:22.622224    4545 main.go:141] libmachine: STDERR: 
	I0819 04:24:22.622241    4545 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2
	I0819 04:24:22.622246    4545 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:22.622259    4545 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:22.622291    4545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:fb:ea:af:69:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2
	I0819 04:24:22.623905    4545 main.go:141] libmachine: STDOUT: 
	I0819 04:24:22.623920    4545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:22.623937    4545 client.go:171] duration metric: took 269.391ms to LocalClient.Create
	I0819 04:24:24.626003    4545 start.go:128] duration metric: took 2.296356333s to createHost
	I0819 04:24:24.626025    4545 start.go:83] releasing machines lock for "enable-default-cni-745000", held for 2.296435541s
	W0819 04:24:24.626039    4545 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:24.635586    4545 out.go:177] * Deleting "enable-default-cni-745000" in qemu2 ...
	W0819 04:24:24.649728    4545 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:24.649734    4545 start.go:729] Will try again in 5 seconds ...
	I0819 04:24:29.651859    4545 start.go:360] acquireMachinesLock for enable-default-cni-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:29.652583    4545 start.go:364] duration metric: took 618.166µs to acquireMachinesLock for "enable-default-cni-745000"
	I0819 04:24:29.652735    4545 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:29.652957    4545 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:29.664618    4545 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:29.715395    4545 start.go:159] libmachine.API.Create for "enable-default-cni-745000" (driver="qemu2")
	I0819 04:24:29.715445    4545 client.go:168] LocalClient.Create starting
	I0819 04:24:29.715562    4545 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:29.715629    4545 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:29.715646    4545 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:29.715719    4545 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:29.715764    4545 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:29.715774    4545 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:29.716326    4545 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:29.880117    4545 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:30.006323    4545 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:30.006332    4545 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:30.006526    4545 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2
	I0819 04:24:30.016551    4545 main.go:141] libmachine: STDOUT: 
	I0819 04:24:30.016568    4545 main.go:141] libmachine: STDERR: 
	I0819 04:24:30.016617    4545 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2 +20000M
	I0819 04:24:30.024836    4545 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:30.024851    4545 main.go:141] libmachine: STDERR: 
	I0819 04:24:30.024860    4545 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2
	I0819 04:24:30.024865    4545 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:30.024879    4545 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:30.024917    4545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:7d:d6:29:f0:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/enable-default-cni-745000/disk.qcow2
	I0819 04:24:30.026533    4545 main.go:141] libmachine: STDOUT: 
	I0819 04:24:30.026548    4545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:30.026560    4545 client.go:171] duration metric: took 311.111667ms to LocalClient.Create
	I0819 04:24:32.028748    4545 start.go:128] duration metric: took 2.375771083s to createHost
	I0819 04:24:32.028822    4545 start.go:83] releasing machines lock for "enable-default-cni-745000", held for 2.376242875s
	W0819 04:24:32.029301    4545 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:32.038934    4545 out.go:201] 
	W0819 04:24:32.047043    4545 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:24:32.047110    4545 out.go:270] * 
	* 
	W0819 04:24:32.049811    4545 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:24:32.059917    4545 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.86s)
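
Note: this is the same failure as the flannel case above; --enable-default-cni only changes the CNI (the log even records the deprecated flag being rewritten to --cni=bridge), and the run never gets past connecting to /var/run/socket_vmnet. The refused connection can be reproduced in isolation, without minikube, by probing the socket directly with BSD netcat (shipped with macOS); the path below is the one from the log:

	nc -U /var/run/socket_vmnet   # "Connection refused" here confirms nothing is listening on the socket

A failed probe points to a host-level daemon problem rather than anything specific to the network plugin under test.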
TestNetworkPlugins/group/bridge/Start (9.85s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.851699666s)

-- stdout --
	* [bridge-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-745000" primary control-plane node in "bridge-745000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:24:34.306735    4655 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:24:34.306878    4655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:34.306884    4655 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:34.306886    4655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:34.307016    4655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:24:34.308078    4655 out.go:352] Setting JSON to false
	I0819 04:24:34.324229    4655 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3237,"bootTime":1724063437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:24:34.324299    4655 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:24:34.329892    4655 out.go:177] * [bridge-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:24:34.336799    4655 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:24:34.336835    4655 notify.go:220] Checking for updates...
	I0819 04:24:34.342833    4655 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:24:34.345789    4655 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:24:34.348816    4655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:24:34.351707    4655 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:24:34.354779    4655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:24:34.358088    4655 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:24:34.358149    4655 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:24:34.358197    4655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:24:34.361810    4655 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:24:34.368770    4655 start.go:297] selected driver: qemu2
	I0819 04:24:34.368776    4655 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:24:34.368781    4655 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:24:34.370788    4655 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:24:34.372452    4655 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:24:34.375937    4655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:24:34.375962    4655 cni.go:84] Creating CNI manager for "bridge"
	I0819 04:24:34.375979    4655 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:24:34.376036    4655 start.go:340] cluster config:
	{Name:bridge-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:24:34.379273    4655 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:24:34.386826    4655 out.go:177] * Starting "bridge-745000" primary control-plane node in "bridge-745000" cluster
	I0819 04:24:34.390787    4655 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:24:34.390800    4655 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:24:34.390807    4655 cache.go:56] Caching tarball of preloaded images
	I0819 04:24:34.390860    4655 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:24:34.390864    4655 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:24:34.390917    4655 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/bridge-745000/config.json ...
	I0819 04:24:34.390927    4655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/bridge-745000/config.json: {Name:mk59264acc4eac6604bc3bbe4e3f53191085f97b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:24:34.391326    4655 start.go:360] acquireMachinesLock for bridge-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:34.391355    4655 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "bridge-745000"
	I0819 04:24:34.391371    4655 start.go:93] Provisioning new machine with config: &{Name:bridge-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:34.391398    4655 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:34.395797    4655 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:34.410630    4655 start.go:159] libmachine.API.Create for "bridge-745000" (driver="qemu2")
	I0819 04:24:34.410655    4655 client.go:168] LocalClient.Create starting
	I0819 04:24:34.410718    4655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:34.410754    4655 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:34.410763    4655 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:34.410814    4655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:34.410837    4655 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:34.410845    4655 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:34.411334    4655 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:34.562330    4655 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:34.722784    4655 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:34.722797    4655 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:34.723019    4655 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2
	I0819 04:24:34.732912    4655 main.go:141] libmachine: STDOUT: 
	I0819 04:24:34.732930    4655 main.go:141] libmachine: STDERR: 
	I0819 04:24:34.732988    4655 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2 +20000M
	I0819 04:24:34.741318    4655 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:34.741343    4655 main.go:141] libmachine: STDERR: 
	I0819 04:24:34.741365    4655 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2
	I0819 04:24:34.741370    4655 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:34.741377    4655 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:34.741405    4655 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:27:d2:18:40:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2
	I0819 04:24:34.743082    4655 main.go:141] libmachine: STDOUT: 
	I0819 04:24:34.743097    4655 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:34.743113    4655 client.go:171] duration metric: took 332.456958ms to LocalClient.Create
	I0819 04:24:36.744979    4655 start.go:128] duration metric: took 2.35359225s to createHost
	I0819 04:24:36.745029    4655 start.go:83] releasing machines lock for "bridge-745000", held for 2.353696166s
	W0819 04:24:36.745090    4655 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:36.764439    4655 out.go:177] * Deleting "bridge-745000" in qemu2 ...
	W0819 04:24:36.785449    4655 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:36.785463    4655 start.go:729] Will try again in 5 seconds ...
	I0819 04:24:41.787622    4655 start.go:360] acquireMachinesLock for bridge-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:41.788281    4655 start.go:364] duration metric: took 518.625µs to acquireMachinesLock for "bridge-745000"
	I0819 04:24:41.788467    4655 start.go:93] Provisioning new machine with config: &{Name:bridge-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:41.788760    4655 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:41.793613    4655 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:41.844642    4655 start.go:159] libmachine.API.Create for "bridge-745000" (driver="qemu2")
	I0819 04:24:41.844694    4655 client.go:168] LocalClient.Create starting
	I0819 04:24:41.844865    4655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:41.844944    4655 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:41.844967    4655 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:41.845027    4655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:41.845072    4655 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:41.845085    4655 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:41.845610    4655 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:42.003938    4655 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:42.072612    4655 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:42.072621    4655 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:42.072795    4655 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2
	I0819 04:24:42.082148    4655 main.go:141] libmachine: STDOUT: 
	I0819 04:24:42.082170    4655 main.go:141] libmachine: STDERR: 
	I0819 04:24:42.082224    4655 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2 +20000M
	I0819 04:24:42.090307    4655 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:42.090324    4655 main.go:141] libmachine: STDERR: 
	I0819 04:24:42.090335    4655 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2
	I0819 04:24:42.090340    4655 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:42.090352    4655 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:42.090395    4655 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:fd:0b:c0:8d:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/bridge-745000/disk.qcow2
	I0819 04:24:42.092056    4655 main.go:141] libmachine: STDOUT: 
	I0819 04:24:42.092072    4655 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:42.092085    4655 client.go:171] duration metric: took 247.3885ms to LocalClient.Create
	I0819 04:24:44.094190    4655 start.go:128] duration metric: took 2.305438458s to createHost
	I0819 04:24:44.094231    4655 start.go:83] releasing machines lock for "bridge-745000", held for 2.305923084s
	W0819 04:24:44.094473    4655 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:44.103820    4655 out.go:201] 
	W0819 04:24:44.110901    4655 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:24:44.110918    4655 out.go:270] * 
	* 
	W0819 04:24:44.112552    4655 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:24:44.121864    4655 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.85s)
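
All of the network-plugin Start failures in this run share one root cause: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client's unix-socket connect was refused before qemu-system-aarch64 was ever launched. Recovery means restoring the socket_vmnet daemon (which runs as root) on the agent rather than changing the tests. The refusal can be reproduced outside minikube with a short, self-contained Go probe; this is a sketch that only assumes the socket path shown in the logs, not minikube code:

	// probe.go: dial the unix socket that socket_vmnet_client connects to.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// With no daemon listening, Dial fails with
		// "connect: connection refused", matching the STDERR lines above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}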

TestNetworkPlugins/group/kindnet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.834731917s)

-- stdout --
	* [kindnet-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-745000" primary control-plane node in "kindnet-745000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:24:46.303707    4767 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:24:46.303856    4767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:46.303859    4767 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:46.303862    4767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:46.303992    4767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:24:46.305040    4767 out.go:352] Setting JSON to false
	I0819 04:24:46.321413    4767 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3249,"bootTime":1724063437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:24:46.321522    4767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:24:46.327925    4767 out.go:177] * [kindnet-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:24:46.334841    4767 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:24:46.334869    4767 notify.go:220] Checking for updates...
	I0819 04:24:46.341904    4767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:24:46.344901    4767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:24:46.348814    4767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:24:46.351855    4767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:24:46.354847    4767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:24:46.358256    4767 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:24:46.358316    4767 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:24:46.358362    4767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:24:46.362819    4767 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:24:46.369856    4767 start.go:297] selected driver: qemu2
	I0819 04:24:46.369866    4767 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:24:46.369874    4767 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:24:46.372021    4767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:24:46.375912    4767 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:24:46.378906    4767 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:24:46.378928    4767 cni.go:84] Creating CNI manager for "kindnet"
	I0819 04:24:46.378937    4767 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 04:24:46.378977    4767 start.go:340] cluster config:
	{Name:kindnet-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:24:46.382422    4767 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:24:46.388791    4767 out.go:177] * Starting "kindnet-745000" primary control-plane node in "kindnet-745000" cluster
	I0819 04:24:46.392807    4767 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:24:46.392821    4767 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:24:46.392829    4767 cache.go:56] Caching tarball of preloaded images
	I0819 04:24:46.392886    4767 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:24:46.392891    4767 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:24:46.392957    4767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/kindnet-745000/config.json ...
	I0819 04:24:46.392969    4767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/kindnet-745000/config.json: {Name:mkd939e0953fe5eb24060f0aed6a61e0f7d883b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:24:46.393271    4767 start.go:360] acquireMachinesLock for kindnet-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:46.393303    4767 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "kindnet-745000"
	I0819 04:24:46.393315    4767 start.go:93] Provisioning new machine with config: &{Name:kindnet-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:46.393350    4767 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:46.401850    4767 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:46.418110    4767 start.go:159] libmachine.API.Create for "kindnet-745000" (driver="qemu2")
	I0819 04:24:46.418134    4767 client.go:168] LocalClient.Create starting
	I0819 04:24:46.418197    4767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:46.418227    4767 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:46.418237    4767 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:46.418275    4767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:46.418297    4767 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:46.418309    4767 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:46.418701    4767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:46.569114    4767 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:46.628648    4767 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:46.628654    4767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:46.628827    4767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2
	I0819 04:24:46.639076    4767 main.go:141] libmachine: STDOUT: 
	I0819 04:24:46.639120    4767 main.go:141] libmachine: STDERR: 
	I0819 04:24:46.639189    4767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2 +20000M
	I0819 04:24:46.648717    4767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:46.648752    4767 main.go:141] libmachine: STDERR: 
	I0819 04:24:46.648768    4767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2
	I0819 04:24:46.648775    4767 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:46.648785    4767 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:46.648811    4767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6a:f2:ad:af:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2
	I0819 04:24:46.651058    4767 main.go:141] libmachine: STDOUT: 
	I0819 04:24:46.651083    4767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:46.651117    4767 client.go:171] duration metric: took 232.978166ms to LocalClient.Create
	I0819 04:24:48.653187    4767 start.go:128] duration metric: took 2.259851875s to createHost
	I0819 04:24:48.653205    4767 start.go:83] releasing machines lock for "kindnet-745000", held for 2.259926s
	W0819 04:24:48.653218    4767 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:48.657706    4767 out.go:177] * Deleting "kindnet-745000" in qemu2 ...
	W0819 04:24:48.670353    4767 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:48.670367    4767 start.go:729] Will try again in 5 seconds ...
	I0819 04:24:53.672553    4767 start.go:360] acquireMachinesLock for kindnet-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:53.673143    4767 start.go:364] duration metric: took 490.5µs to acquireMachinesLock for "kindnet-745000"
	I0819 04:24:53.673264    4767 start.go:93] Provisioning new machine with config: &{Name:kindnet-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:53.673511    4767 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:53.679121    4767 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:53.725603    4767 start.go:159] libmachine.API.Create for "kindnet-745000" (driver="qemu2")
	I0819 04:24:53.725653    4767 client.go:168] LocalClient.Create starting
	I0819 04:24:53.725801    4767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:53.725870    4767 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:53.725884    4767 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:53.725954    4767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:53.726000    4767 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:53.726015    4767 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:53.726976    4767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:53.884422    4767 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:54.040256    4767 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:54.040264    4767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:54.040495    4767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2
	I0819 04:24:54.050422    4767 main.go:141] libmachine: STDOUT: 
	I0819 04:24:54.050442    4767 main.go:141] libmachine: STDERR: 
	I0819 04:24:54.050499    4767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2 +20000M
	I0819 04:24:54.058601    4767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:54.058617    4767 main.go:141] libmachine: STDERR: 
	I0819 04:24:54.058631    4767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2
	I0819 04:24:54.058637    4767 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:54.058645    4767 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:54.058667    4767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:2e:9c:64:e4:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kindnet-745000/disk.qcow2
	I0819 04:24:54.060212    4767 main.go:141] libmachine: STDOUT: 
	I0819 04:24:54.060230    4767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:54.060243    4767 client.go:171] duration metric: took 334.58825ms to LocalClient.Create
	I0819 04:24:56.062437    4767 start.go:128] duration metric: took 2.388918292s to createHost
	I0819 04:24:56.062548    4767 start.go:83] releasing machines lock for "kindnet-745000", held for 2.389410042s
	W0819 04:24:56.062908    4767 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:24:56.078633    4767 out.go:201] 
	W0819 04:24:56.081782    4767 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:24:56.081805    4767 out.go:270] * 
	* 
	W0819 04:24:56.084409    4767 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:24:56.096646    4767 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.84s)
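
The kindnet log above also shows the retry shape every one of these Start failures follows: createHost fails, the half-created machine is deleted, minikube waits five seconds ("Will try again in 5 seconds"), and a second identical failure exits with status 80 (GUEST_PROVISION). A compact Go sketch of that control flow, with the host-creation step stubbed to fail the way the refused dial does (an assumed illustration, not minikube's actual start.go):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the start.go step that fails while the
	// socket_vmnet daemon is down.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		name := "kindnet-745000"
		if err := createHost(name); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			if err := createHost(name); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status net_test.go reports
			}
		}
		fmt.Println("started", name)
	}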

TestNetworkPlugins/group/kubenet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.858855333s)

-- stdout --
	* [kubenet-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-745000" primary control-plane node in "kubenet-745000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:24:58.443824    4885 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:24:58.443967    4885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:58.443975    4885 out.go:358] Setting ErrFile to fd 2...
	I0819 04:24:58.443979    4885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:24:58.444103    4885 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:24:58.445257    4885 out.go:352] Setting JSON to false
	I0819 04:24:58.461509    4885 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3261,"bootTime":1724063437,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:24:58.461588    4885 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:24:58.467992    4885 out.go:177] * [kubenet-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:24:58.475897    4885 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:24:58.475958    4885 notify.go:220] Checking for updates...
	I0819 04:24:58.483997    4885 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:24:58.486875    4885 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:24:58.490013    4885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:24:58.492994    4885 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:24:58.494562    4885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:24:58.498362    4885 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:24:58.498429    4885 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:24:58.498494    4885 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:24:58.502913    4885 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:24:58.507932    4885 start.go:297] selected driver: qemu2
	I0819 04:24:58.507938    4885 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:24:58.507944    4885 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:24:58.510072    4885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:24:58.512956    4885 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:24:58.516030    4885 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:24:58.516047    4885 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0819 04:24:58.516068    4885 start.go:340] cluster config:
	{Name:kubenet-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:24:58.519579    4885 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:24:58.526940    4885 out.go:177] * Starting "kubenet-745000" primary control-plane node in "kubenet-745000" cluster
	I0819 04:24:58.530879    4885 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:24:58.530896    4885 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:24:58.530906    4885 cache.go:56] Caching tarball of preloaded images
	I0819 04:24:58.530964    4885 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:24:58.530969    4885 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:24:58.531042    4885 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/kubenet-745000/config.json ...
	I0819 04:24:58.531053    4885 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/kubenet-745000/config.json: {Name:mkaa3f9e8c5be87ca8a889fd8e8928dfce68e570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:24:58.531262    4885 start.go:360] acquireMachinesLock for kubenet-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:24:58.531293    4885 start.go:364] duration metric: took 25.166µs to acquireMachinesLock for "kubenet-745000"
	I0819 04:24:58.531305    4885 start.go:93] Provisioning new machine with config: &{Name:kubenet-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:24:58.531327    4885 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:24:58.537856    4885 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:24:58.552906    4885 start.go:159] libmachine.API.Create for "kubenet-745000" (driver="qemu2")
	I0819 04:24:58.552931    4885 client.go:168] LocalClient.Create starting
	I0819 04:24:58.552989    4885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:24:58.553031    4885 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:58.553044    4885 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:58.553080    4885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:24:58.553105    4885 main.go:141] libmachine: Decoding PEM data...
	I0819 04:24:58.553118    4885 main.go:141] libmachine: Parsing certificate...
	I0819 04:24:58.553454    4885 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:24:58.705719    4885 main.go:141] libmachine: Creating SSH key...
	I0819 04:24:58.752957    4885 main.go:141] libmachine: Creating Disk image...
	I0819 04:24:58.752971    4885 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:24:58.753186    4885 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2
	I0819 04:24:58.762894    4885 main.go:141] libmachine: STDOUT: 
	I0819 04:24:58.762916    4885 main.go:141] libmachine: STDERR: 
	I0819 04:24:58.762975    4885 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2 +20000M
	I0819 04:24:58.771005    4885 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:24:58.771028    4885 main.go:141] libmachine: STDERR: 
	I0819 04:24:58.771041    4885 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2
	I0819 04:24:58.771047    4885 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:24:58.771058    4885 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:24:58.771083    4885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:2b:8b:6a:8b:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2
	I0819 04:24:58.772672    4885 main.go:141] libmachine: STDOUT: 
	I0819 04:24:58.772689    4885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:24:58.772707    4885 client.go:171] duration metric: took 219.77525ms to LocalClient.Create
	I0819 04:25:00.774850    4885 start.go:128] duration metric: took 2.243529917s to createHost
	I0819 04:25:00.774924    4885 start.go:83] releasing machines lock for "kubenet-745000", held for 2.24364125s
	W0819 04:25:00.774992    4885 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:00.790490    4885 out.go:177] * Deleting "kubenet-745000" in qemu2 ...
	W0819 04:25:00.806605    4885 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:00.806625    4885 start.go:729] Will try again in 5 seconds ...
	I0819 04:25:05.808883    4885 start.go:360] acquireMachinesLock for kubenet-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:25:05.809455    4885 start.go:364] duration metric: took 469.542µs to acquireMachinesLock for "kubenet-745000"
	I0819 04:25:05.809552    4885 start.go:93] Provisioning new machine with config: &{Name:kubenet-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:25:05.809839    4885 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:25:05.818526    4885 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:25:05.869080    4885 start.go:159] libmachine.API.Create for "kubenet-745000" (driver="qemu2")
	I0819 04:25:05.869140    4885 client.go:168] LocalClient.Create starting
	I0819 04:25:05.869274    4885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:25:05.869349    4885 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:05.869377    4885 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:05.869444    4885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:25:05.869492    4885 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:05.869506    4885 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:05.870233    4885 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:25:06.026948    4885 main.go:141] libmachine: Creating SSH key...
	I0819 04:25:06.210901    4885 main.go:141] libmachine: Creating Disk image...
	I0819 04:25:06.210909    4885 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:25:06.211122    4885 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2
	I0819 04:25:06.220836    4885 main.go:141] libmachine: STDOUT: 
	I0819 04:25:06.220854    4885 main.go:141] libmachine: STDERR: 
	I0819 04:25:06.220899    4885 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2 +20000M
	I0819 04:25:06.228767    4885 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:25:06.228782    4885 main.go:141] libmachine: STDERR: 
	I0819 04:25:06.228795    4885 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2
	I0819 04:25:06.228800    4885 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:25:06.228812    4885 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:25:06.228846    4885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:21:3d:90:7a:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/kubenet-745000/disk.qcow2
	I0819 04:25:06.230488    4885 main.go:141] libmachine: STDOUT: 
	I0819 04:25:06.230505    4885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:25:06.230521    4885 client.go:171] duration metric: took 361.379917ms to LocalClient.Create
	I0819 04:25:08.232702    4885 start.go:128] duration metric: took 2.422854375s to createHost
	I0819 04:25:08.232762    4885 start.go:83] releasing machines lock for "kubenet-745000", held for 2.423313625s
	W0819 04:25:08.233078    4885 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:08.241601    4885 out.go:201] 
	W0819 04:25:08.248857    4885 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:25:08.248951    4885 out.go:270] * 
	* 
	W0819 04:25:08.251975    4885 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:25:08.265639    4885 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)
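Every failure in this group reduces to the same host-side condition: nothing is listening on /var/run/socket_vmnet, so each dial from the qemu2 driver is refused before a VM can boot. A minimal standalone Go probe (illustrative only, not part of the test suite; the socket path is taken from the logs above) reproduces that condition:

	// probe.go - hedged sketch: check whether socket_vmnet is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the daemon down this prints "connect: connection refused",
			// matching the ERROR lines captured in the stdout blocks above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}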

TestNetworkPlugins/group/custom-flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
E0819 04:25:18.676915    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.850292458s)

-- stdout --
	* [custom-flannel-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-745000" primary control-plane node in "custom-flannel-745000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:25:10.457633    4994 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:10.457772    4994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:10.457775    4994 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:10.457778    4994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:10.457908    4994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:25:10.459027    4994 out.go:352] Setting JSON to false
	I0819 04:25:10.475322    4994 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3273,"bootTime":1724063437,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:25:10.475391    4994 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:25:10.481904    4994 out.go:177] * [custom-flannel-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:25:10.489920    4994 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:25:10.489958    4994 notify.go:220] Checking for updates...
	I0819 04:25:10.495817    4994 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:25:10.498818    4994 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:25:10.501859    4994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:25:10.504833    4994 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:25:10.507815    4994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:25:10.511139    4994 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:10.511205    4994 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:25:10.511250    4994 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:25:10.515891    4994 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:25:10.522831    4994 start.go:297] selected driver: qemu2
	I0819 04:25:10.522837    4994 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:25:10.522843    4994 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:25:10.524858    4994 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:25:10.527877    4994 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:25:10.530847    4994 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:25:10.530889    4994 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0819 04:25:10.530896    4994 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0819 04:25:10.530922    4994 start.go:340] cluster config:
	{Name:custom-flannel-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:25:10.534422    4994 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:25:10.541876    4994 out.go:177] * Starting "custom-flannel-745000" primary control-plane node in "custom-flannel-745000" cluster
	I0819 04:25:10.545846    4994 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:25:10.545864    4994 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:25:10.545872    4994 cache.go:56] Caching tarball of preloaded images
	I0819 04:25:10.545935    4994 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:25:10.545940    4994 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:25:10.545999    4994 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/custom-flannel-745000/config.json ...
	I0819 04:25:10.546009    4994 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/custom-flannel-745000/config.json: {Name:mk94a153755221a5e6eb481afc926a4e25ee26bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:25:10.546287    4994 start.go:360] acquireMachinesLock for custom-flannel-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:25:10.546321    4994 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "custom-flannel-745000"
	I0819 04:25:10.546333    4994 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:25:10.546367    4994 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:25:10.553777    4994 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:25:10.569693    4994 start.go:159] libmachine.API.Create for "custom-flannel-745000" (driver="qemu2")
	I0819 04:25:10.569728    4994 client.go:168] LocalClient.Create starting
	I0819 04:25:10.569795    4994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:25:10.569826    4994 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:10.569835    4994 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:10.569879    4994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:25:10.569903    4994 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:10.569910    4994 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:10.570246    4994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:25:10.731760    4994 main.go:141] libmachine: Creating SSH key...
	I0819 04:25:10.766312    4994 main.go:141] libmachine: Creating Disk image...
	I0819 04:25:10.766316    4994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:25:10.766488    4994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2
	I0819 04:25:10.776008    4994 main.go:141] libmachine: STDOUT: 
	I0819 04:25:10.776025    4994 main.go:141] libmachine: STDERR: 
	I0819 04:25:10.776069    4994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2 +20000M
	I0819 04:25:10.784003    4994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:25:10.784018    4994 main.go:141] libmachine: STDERR: 
	I0819 04:25:10.784037    4994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2
	I0819 04:25:10.784049    4994 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:25:10.784062    4994 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:25:10.784086    4994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e6:61:95:c0:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2
	I0819 04:25:10.785688    4994 main.go:141] libmachine: STDOUT: 
	I0819 04:25:10.785702    4994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:25:10.785720    4994 client.go:171] duration metric: took 215.989459ms to LocalClient.Create
	I0819 04:25:12.787893    4994 start.go:128] duration metric: took 2.2415315s to createHost
	I0819 04:25:12.787971    4994 start.go:83] releasing machines lock for "custom-flannel-745000", held for 2.241669458s
	W0819 04:25:12.788046    4994 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:12.795478    4994 out.go:177] * Deleting "custom-flannel-745000" in qemu2 ...
	W0819 04:25:12.825912    4994 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:12.825948    4994 start.go:729] Will try again in 5 seconds ...
	I0819 04:25:17.828163    4994 start.go:360] acquireMachinesLock for custom-flannel-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:25:17.828836    4994 start.go:364] duration metric: took 500.667µs to acquireMachinesLock for "custom-flannel-745000"
	I0819 04:25:17.829022    4994 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:25:17.829307    4994 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:25:17.836892    4994 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:25:17.878435    4994 start.go:159] libmachine.API.Create for "custom-flannel-745000" (driver="qemu2")
	I0819 04:25:17.878490    4994 client.go:168] LocalClient.Create starting
	I0819 04:25:17.878595    4994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:25:17.878661    4994 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:17.878678    4994 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:17.878737    4994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:25:17.878775    4994 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:17.878787    4994 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:17.879254    4994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:25:18.034984    4994 main.go:141] libmachine: Creating SSH key...
	I0819 04:25:18.217397    4994 main.go:141] libmachine: Creating Disk image...
	I0819 04:25:18.217406    4994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:25:18.217636    4994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2
	I0819 04:25:18.227425    4994 main.go:141] libmachine: STDOUT: 
	I0819 04:25:18.227448    4994 main.go:141] libmachine: STDERR: 
	I0819 04:25:18.227503    4994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2 +20000M
	I0819 04:25:18.235939    4994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:25:18.235955    4994 main.go:141] libmachine: STDERR: 
	I0819 04:25:18.235966    4994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2
	I0819 04:25:18.235971    4994 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:25:18.235982    4994 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:25:18.236036    4994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:4d:43:68:78:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/custom-flannel-745000/disk.qcow2
	I0819 04:25:18.237731    4994 main.go:141] libmachine: STDOUT: 
	I0819 04:25:18.237747    4994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:25:18.237758    4994 client.go:171] duration metric: took 359.264166ms to LocalClient.Create
	I0819 04:25:20.239958    4994 start.go:128] duration metric: took 2.4105995s to createHost
	I0819 04:25:20.240043    4994 start.go:83] releasing machines lock for "custom-flannel-745000", held for 2.411189667s
	W0819 04:25:20.240568    4994 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:20.249155    4994 out.go:201] 
	W0819 04:25:20.253163    4994 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:25:20.253193    4994 out.go:270] * 
	* 
	W0819 04:25:20.255839    4994 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:25:20.265171    4994 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.85s)

TestNetworkPlugins/group/calico/Start (9.77s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.767188916s)

-- stdout --
	* [calico-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-745000" primary control-plane node in "calico-745000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:25:22.662987    5111 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:22.663119    5111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:22.663122    5111 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:22.663124    5111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:22.663235    5111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:25:22.664422    5111 out.go:352] Setting JSON to false
	I0819 04:25:22.680900    5111 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3285,"bootTime":1724063437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:25:22.680968    5111 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:25:22.688247    5111 out.go:177] * [calico-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:25:22.696205    5111 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:25:22.696251    5111 notify.go:220] Checking for updates...
	I0819 04:25:22.703242    5111 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:25:22.706206    5111 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:25:22.709273    5111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:25:22.712439    5111 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:25:22.715264    5111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:25:22.718634    5111 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:22.718700    5111 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:25:22.718768    5111 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:25:22.723266    5111 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:25:22.730222    5111 start.go:297] selected driver: qemu2
	I0819 04:25:22.730228    5111 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:25:22.730235    5111 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:25:22.732443    5111 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:25:22.735222    5111 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:25:22.738203    5111 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:25:22.738223    5111 cni.go:84] Creating CNI manager for "calico"
	I0819 04:25:22.738226    5111 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0819 04:25:22.738253    5111 start.go:340] cluster config:
	{Name:calico-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:25:22.741630    5111 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:25:22.749261    5111 out.go:177] * Starting "calico-745000" primary control-plane node in "calico-745000" cluster
	I0819 04:25:22.753255    5111 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:25:22.753272    5111 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:25:22.753286    5111 cache.go:56] Caching tarball of preloaded images
	I0819 04:25:22.753362    5111 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:25:22.753367    5111 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:25:22.753431    5111 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/calico-745000/config.json ...
	I0819 04:25:22.753447    5111 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/calico-745000/config.json: {Name:mkb04e434877a03967398ca5dd61c3a1f05c7027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:25:22.753852    5111 start.go:360] acquireMachinesLock for calico-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:25:22.753880    5111 start.go:364] duration metric: took 23.458µs to acquireMachinesLock for "calico-745000"
	I0819 04:25:22.753893    5111 start.go:93] Provisioning new machine with config: &{Name:calico-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:25:22.753916    5111 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:25:22.762219    5111 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:25:22.777246    5111 start.go:159] libmachine.API.Create for "calico-745000" (driver="qemu2")
	I0819 04:25:22.777274    5111 client.go:168] LocalClient.Create starting
	I0819 04:25:22.777346    5111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:25:22.777381    5111 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:22.777390    5111 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:22.777429    5111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:25:22.777455    5111 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:22.777466    5111 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:22.777972    5111 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:25:22.927618    5111 main.go:141] libmachine: Creating SSH key...
	I0819 04:25:23.000136    5111 main.go:141] libmachine: Creating Disk image...
	I0819 04:25:23.000148    5111 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:25:23.000357    5111 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2
	I0819 04:25:23.009975    5111 main.go:141] libmachine: STDOUT: 
	I0819 04:25:23.010009    5111 main.go:141] libmachine: STDERR: 
	I0819 04:25:23.010053    5111 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2 +20000M
	I0819 04:25:23.018015    5111 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:25:23.018036    5111 main.go:141] libmachine: STDERR: 
	I0819 04:25:23.018049    5111 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2
	I0819 04:25:23.018054    5111 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:25:23.018067    5111 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:25:23.018091    5111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:94:a5:04:4b:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2
	I0819 04:25:23.019692    5111 main.go:141] libmachine: STDOUT: 
	I0819 04:25:23.019707    5111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:25:23.019725    5111 client.go:171] duration metric: took 242.450667ms to LocalClient.Create
	I0819 04:25:25.021787    5111 start.go:128] duration metric: took 2.267891s to createHost
	I0819 04:25:25.021808    5111 start.go:83] releasing machines lock for "calico-745000", held for 2.267950834s
	W0819 04:25:25.021847    5111 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:25.032250    5111 out.go:177] * Deleting "calico-745000" in qemu2 ...
	W0819 04:25:25.048026    5111 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:25.048036    5111 start.go:729] Will try again in 5 seconds ...
	I0819 04:25:30.048986    5111 start.go:360] acquireMachinesLock for calico-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:25:30.049539    5111 start.go:364] duration metric: took 432.75µs to acquireMachinesLock for "calico-745000"
	I0819 04:25:30.049622    5111 start.go:93] Provisioning new machine with config: &{Name:calico-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:25:30.049875    5111 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:25:30.057167    5111 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:25:30.096749    5111 start.go:159] libmachine.API.Create for "calico-745000" (driver="qemu2")
	I0819 04:25:30.096798    5111 client.go:168] LocalClient.Create starting
	I0819 04:25:30.096911    5111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:25:30.096969    5111 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:30.096986    5111 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:30.097048    5111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:25:30.097087    5111 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:30.097100    5111 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:30.097652    5111 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:25:30.252802    5111 main.go:141] libmachine: Creating SSH key...
	I0819 04:25:30.334664    5111 main.go:141] libmachine: Creating Disk image...
	I0819 04:25:30.334674    5111 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:25:30.334845    5111 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2
	I0819 04:25:30.344422    5111 main.go:141] libmachine: STDOUT: 
	I0819 04:25:30.344440    5111 main.go:141] libmachine: STDERR: 
	I0819 04:25:30.344500    5111 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2 +20000M
	I0819 04:25:30.352755    5111 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:25:30.352771    5111 main.go:141] libmachine: STDERR: 
	I0819 04:25:30.352782    5111 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2
	I0819 04:25:30.352785    5111 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:25:30.352798    5111 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:25:30.352831    5111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:db:73:5a:71:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/calico-745000/disk.qcow2
	I0819 04:25:30.354537    5111 main.go:141] libmachine: STDOUT: 
	I0819 04:25:30.354552    5111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:25:30.354565    5111 client.go:171] duration metric: took 257.765292ms to LocalClient.Create
	I0819 04:25:32.356825    5111 start.go:128] duration metric: took 2.306852459s to createHost
	I0819 04:25:32.356901    5111 start.go:83] releasing machines lock for "calico-745000", held for 2.30735075s
	W0819 04:25:32.357293    5111 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:32.367039    5111 out.go:201] 
	W0819 04:25:32.375119    5111 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:25:32.375147    5111 out.go:270] * 
	* 
	W0819 04:25:32.377852    5111 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:25:32.387996    5111 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
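
All starts in this group fail at the same step: socket_vmnet_client cannot dial the unix socket at /var/run/socket_vmnet, so QEMU is never launched. The step can be reproduced in isolation with a short Go probe (illustrative only, not part of the test suite); "connection refused" from it means the socket_vmnet daemon is not listening on that path:

	// probe.go: dial the same unix socket that socket_vmnet_client uses.
	// The socket path is taken from the log above; everything else is a sketch.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Mirrors the `Failed to connect to "/var/run/socket_vmnet"` lines above.
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}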

TestNetworkPlugins/group/false/Start (9.8s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-745000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.7936695s)

-- stdout --
	* [false-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-745000" primary control-plane node in "false-745000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:25:34.825054    5232 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:34.825216    5232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:34.825219    5232 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:34.825222    5232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:34.825370    5232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:25:34.826732    5232 out.go:352] Setting JSON to false
	I0819 04:25:34.845125    5232 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3297,"bootTime":1724063437,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:25:34.845221    5232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:25:34.850718    5232 out.go:177] * [false-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:25:34.857744    5232 notify.go:220] Checking for updates...
	I0819 04:25:34.861725    5232 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:25:34.872677    5232 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:25:34.880729    5232 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:25:34.883657    5232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:25:34.886685    5232 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:25:34.889706    5232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:25:34.892951    5232 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:34.893016    5232 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:25:34.893059    5232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:25:34.897673    5232 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:25:34.904684    5232 start.go:297] selected driver: qemu2
	I0819 04:25:34.904691    5232 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:25:34.904706    5232 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:25:34.906892    5232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:25:34.909693    5232 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:25:34.912767    5232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:25:34.912818    5232 cni.go:84] Creating CNI manager for "false"
	I0819 04:25:34.912856    5232 start.go:340] cluster config:
	{Name:false-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:25:34.916409    5232 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:25:34.919710    5232 out.go:177] * Starting "false-745000" primary control-plane node in "false-745000" cluster
	I0819 04:25:34.923666    5232 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:25:34.923679    5232 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:25:34.923687    5232 cache.go:56] Caching tarball of preloaded images
	I0819 04:25:34.923735    5232 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:25:34.923740    5232 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:25:34.923791    5232 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/false-745000/config.json ...
	I0819 04:25:34.923801    5232 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/false-745000/config.json: {Name:mk232227a7d660b181fee931c6df490cf5c8d107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:25:34.924095    5232 start.go:360] acquireMachinesLock for false-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:25:34.924125    5232 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "false-745000"
	I0819 04:25:34.924136    5232 start.go:93] Provisioning new machine with config: &{Name:false-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:25:34.924185    5232 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:25:34.927687    5232 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:25:34.943321    5232 start.go:159] libmachine.API.Create for "false-745000" (driver="qemu2")
	I0819 04:25:34.943345    5232 client.go:168] LocalClient.Create starting
	I0819 04:25:34.943403    5232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:25:34.943433    5232 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:34.943441    5232 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:34.943488    5232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:25:34.943515    5232 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:34.943528    5232 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:34.944018    5232 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:25:35.093069    5232 main.go:141] libmachine: Creating SSH key...
	I0819 04:25:35.179520    5232 main.go:141] libmachine: Creating Disk image...
	I0819 04:25:35.179531    5232 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:25:35.179731    5232 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2
	I0819 04:25:35.189336    5232 main.go:141] libmachine: STDOUT: 
	I0819 04:25:35.189357    5232 main.go:141] libmachine: STDERR: 
	I0819 04:25:35.189402    5232 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2 +20000M
	I0819 04:25:35.197340    5232 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:25:35.197362    5232 main.go:141] libmachine: STDERR: 
	I0819 04:25:35.197377    5232 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2
	I0819 04:25:35.197382    5232 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:25:35.197394    5232 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:25:35.197419    5232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:50:31:c4:5f:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2
	I0819 04:25:35.199077    5232 main.go:141] libmachine: STDOUT: 
	I0819 04:25:35.199112    5232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:25:35.199131    5232 client.go:171] duration metric: took 255.7825ms to LocalClient.Create
	I0819 04:25:37.201199    5232 start.go:128] duration metric: took 2.277032125s to createHost
	I0819 04:25:37.201243    5232 start.go:83] releasing machines lock for "false-745000", held for 2.277142708s
	W0819 04:25:37.201259    5232 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:37.204875    5232 out.go:177] * Deleting "false-745000" in qemu2 ...
	W0819 04:25:37.215046    5232 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:37.215052    5232 start.go:729] Will try again in 5 seconds ...
	I0819 04:25:42.215513    5232 start.go:360] acquireMachinesLock for false-745000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:25:42.216036    5232 start.go:364] duration metric: took 410.333µs to acquireMachinesLock for "false-745000"
	I0819 04:25:42.216213    5232 start.go:93] Provisioning new machine with config: &{Name:false-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:25:42.216481    5232 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:25:42.222075    5232 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:25:42.273658    5232 start.go:159] libmachine.API.Create for "false-745000" (driver="qemu2")
	I0819 04:25:42.273707    5232 client.go:168] LocalClient.Create starting
	I0819 04:25:42.273818    5232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:25:42.273878    5232 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:42.273892    5232 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:42.273952    5232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:25:42.273996    5232 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:42.274006    5232 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:42.274567    5232 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:25:42.435163    5232 main.go:141] libmachine: Creating SSH key...
	I0819 04:25:42.520690    5232 main.go:141] libmachine: Creating Disk image...
	I0819 04:25:42.520696    5232 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:25:42.520873    5232 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2
	I0819 04:25:42.530433    5232 main.go:141] libmachine: STDOUT: 
	I0819 04:25:42.530464    5232 main.go:141] libmachine: STDERR: 
	I0819 04:25:42.530510    5232 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2 +20000M
	I0819 04:25:42.538403    5232 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:25:42.538434    5232 main.go:141] libmachine: STDERR: 
	I0819 04:25:42.538443    5232 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2
	I0819 04:25:42.538449    5232 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:25:42.538460    5232 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:25:42.538483    5232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:8c:94:64:ed:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/false-745000/disk.qcow2
	I0819 04:25:42.540189    5232 main.go:141] libmachine: STDOUT: 
	I0819 04:25:42.540213    5232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:25:42.540225    5232 client.go:171] duration metric: took 266.516334ms to LocalClient.Create
	I0819 04:25:44.542408    5232 start.go:128] duration metric: took 2.325891708s to createHost
	I0819 04:25:44.542515    5232 start.go:83] releasing machines lock for "false-745000", held for 2.326481083s
	W0819 04:25:44.542960    5232 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:44.551643    5232 out.go:201] 
	W0819 04:25:44.559773    5232 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:25:44.559850    5232 out.go:270] * 
	* 
	W0819 04:25:44.562490    5232 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:25:44.576635    5232 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.80s)
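
The stderr trace above shows the driver's recovery path: one failed create triggers "! StartHost failed, but will try again", the half-created profile is deleted, there is a fixed 5-second wait, and a single retry follows; when the retry hits the same socket error, the run is surfaced as GUEST_PROVISION with exit status 80. A hedged sketch of that control flow (not minikube's actual source, only the shape visible in the log):

	// retry.go: the create -> delete -> wait 5s -> retry-once flow from the log.
	// createHost stands in for libmachine.API.Create, which here always fails
	// at the socket dial.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				return
			}
		}
		fmt.Println("host created")
	}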

TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.850074042s)

-- stdout --
	* [old-k8s-version-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-971000" primary control-plane node in "old-k8s-version-971000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-971000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:25:46.784505    5344 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:46.784689    5344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:46.784693    5344 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:46.784695    5344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:46.784867    5344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:25:46.786582    5344 out.go:352] Setting JSON to false
	I0819 04:25:46.806079    5344 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3309,"bootTime":1724063437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:25:46.806163    5344 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:25:46.809964    5344 out.go:177] * [old-k8s-version-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:25:46.819971    5344 notify.go:220] Checking for updates...
	I0819 04:25:46.823958    5344 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:25:46.827915    5344 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:25:46.830955    5344 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:25:46.833951    5344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:25:46.836952    5344 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:25:46.839911    5344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:25:46.843369    5344 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:46.843438    5344 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:25:46.843478    5344 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:25:46.847810    5344 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:25:46.854954    5344 start.go:297] selected driver: qemu2
	I0819 04:25:46.854962    5344 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:25:46.854969    5344 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:25:46.857130    5344 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:25:46.860842    5344 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:25:46.863937    5344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:25:46.863955    5344 cni.go:84] Creating CNI manager for ""
	I0819 04:25:46.863961    5344 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 04:25:46.863980    5344 start.go:340] cluster config:
	{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:25:46.867385    5344 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:25:46.873850    5344 out.go:177] * Starting "old-k8s-version-971000" primary control-plane node in "old-k8s-version-971000" cluster
	I0819 04:25:46.877926    5344 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:25:46.877944    5344 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:25:46.877952    5344 cache.go:56] Caching tarball of preloaded images
	I0819 04:25:46.878007    5344 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:25:46.878012    5344 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 04:25:46.878066    5344 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/old-k8s-version-971000/config.json ...
	I0819 04:25:46.878076    5344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/old-k8s-version-971000/config.json: {Name:mka6fb7cf355a7c4471c347a30f94f6269c8ae17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:25:46.878463    5344 start.go:360] acquireMachinesLock for old-k8s-version-971000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:25:46.878501    5344 start.go:364] duration metric: took 31µs to acquireMachinesLock for "old-k8s-version-971000"
	I0819 04:25:46.878514    5344 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:25:46.878545    5344 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:25:46.886937    5344 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:25:46.902022    5344 start.go:159] libmachine.API.Create for "old-k8s-version-971000" (driver="qemu2")
	I0819 04:25:46.902048    5344 client.go:168] LocalClient.Create starting
	I0819 04:25:46.902110    5344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:25:46.902140    5344 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:46.902149    5344 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:46.902190    5344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:25:46.902213    5344 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:46.902219    5344 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:46.902702    5344 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:25:47.053264    5344 main.go:141] libmachine: Creating SSH key...
	I0819 04:25:47.090402    5344 main.go:141] libmachine: Creating Disk image...
	I0819 04:25:47.090409    5344 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:25:47.090608    5344 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0819 04:25:47.099821    5344 main.go:141] libmachine: STDOUT: 
	I0819 04:25:47.099841    5344 main.go:141] libmachine: STDERR: 
	I0819 04:25:47.099898    5344 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2 +20000M
	I0819 04:25:47.108573    5344 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:25:47.108600    5344 main.go:141] libmachine: STDERR: 
	I0819 04:25:47.108618    5344 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0819 04:25:47.108623    5344 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:25:47.108632    5344 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:25:47.108662    5344 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:fd:dd:25:06:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0819 04:25:47.110700    5344 main.go:141] libmachine: STDOUT: 
	I0819 04:25:47.110720    5344 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:25:47.110738    5344 client.go:171] duration metric: took 208.687167ms to LocalClient.Create
	I0819 04:25:49.112908    5344 start.go:128] duration metric: took 2.234361084s to createHost
	I0819 04:25:49.112982    5344 start.go:83] releasing machines lock for "old-k8s-version-971000", held for 2.234497709s
	W0819 04:25:49.113056    5344 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:49.124163    5344 out.go:177] * Deleting "old-k8s-version-971000" in qemu2 ...
	W0819 04:25:49.151760    5344 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:49.151786    5344 start.go:729] Will try again in 5 seconds ...
	I0819 04:25:54.154018    5344 start.go:360] acquireMachinesLock for old-k8s-version-971000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:25:54.154648    5344 start.go:364] duration metric: took 476.916µs to acquireMachinesLock for "old-k8s-version-971000"
	I0819 04:25:54.154800    5344 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:25:54.155101    5344 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:25:54.166646    5344 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:25:54.216739    5344 start.go:159] libmachine.API.Create for "old-k8s-version-971000" (driver="qemu2")
	I0819 04:25:54.216796    5344 client.go:168] LocalClient.Create starting
	I0819 04:25:54.216920    5344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:25:54.216979    5344 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:54.216996    5344 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:54.217050    5344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:25:54.217095    5344 main.go:141] libmachine: Decoding PEM data...
	I0819 04:25:54.217107    5344 main.go:141] libmachine: Parsing certificate...
	I0819 04:25:54.217657    5344 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:25:54.379370    5344 main.go:141] libmachine: Creating SSH key...
	I0819 04:25:54.538383    5344 main.go:141] libmachine: Creating Disk image...
	I0819 04:25:54.538390    5344 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:25:54.538605    5344 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0819 04:25:54.548433    5344 main.go:141] libmachine: STDOUT: 
	I0819 04:25:54.548456    5344 main.go:141] libmachine: STDERR: 
	I0819 04:25:54.548507    5344 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2 +20000M
	I0819 04:25:54.556407    5344 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:25:54.556421    5344 main.go:141] libmachine: STDERR: 
	I0819 04:25:54.556431    5344 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0819 04:25:54.556447    5344 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:25:54.556462    5344 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:25:54.556490    5344 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:0c:4b:b6:d0:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0819 04:25:54.558170    5344 main.go:141] libmachine: STDOUT: 
	I0819 04:25:54.558189    5344 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:25:54.558200    5344 client.go:171] duration metric: took 341.398792ms to LocalClient.Create
	I0819 04:25:56.560385    5344 start.go:128] duration metric: took 2.405277875s to createHost
	I0819 04:25:56.560461    5344 start.go:83] releasing machines lock for "old-k8s-version-971000", held for 2.405818458s
	W0819 04:25:56.560948    5344 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:25:56.568399    5344 out.go:201] 
	W0819 04:25:56.576376    5344 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:25:56.576413    5344 out.go:270] * 
	* 
	W0819 04:25:56.579349    5344 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:25:56.588401    5344 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (62.439708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-971000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-971000 create -f testdata/busybox.yaml: exit status 1 (30.242667ms)

** stderr ** 
	error: context "old-k8s-version-971000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-971000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (29.246917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (29.64775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
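
Because FirstStart exited 80 before a VM existed, no kubeconfig context was ever written for the profile, so this subtest and the serial subtests after it fail immediately with `context "old-k8s-version-971000" does not exist` rather than against a live cluster. A quick pre-check reproduces the condition (illustrative sketch, not part of the suite; assumes kubectl is on PATH):

	// ctxcheck.go: list kubeconfig context names and look for the profile's.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		for _, name := range strings.Fields(string(out)) {
			if name == "old-k8s-version-971000" {
				fmt.Println("context exists")
				return
			}
		}
		fmt.Println(`context "old-k8s-version-971000" does not exist`)
	}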

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-971000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-971000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-971000 describe deploy/metrics-server -n kube-system: exit status 1 (27.186125ms)

** stderr ** 
	error: context "old-k8s-version-971000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-971000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (29.32025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.193135208s)

-- stdout --
	* [old-k8s-version-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-971000" primary control-plane node in "old-k8s-version-971000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:00.249125    5397 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:00.249274    5397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:00.249280    5397 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:00.249282    5397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:00.249424    5397 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:00.250599    5397 out.go:352] Setting JSON to false
	I0819 04:26:00.266960    5397 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3323,"bootTime":1724063437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:26:00.267028    5397 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:00.271952    5397 out.go:177] * [old-k8s-version-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:00.278926    5397 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:26:00.278994    5397 notify.go:220] Checking for updates...
	I0819 04:26:00.286824    5397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:26:00.289880    5397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:00.292859    5397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:00.295784    5397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:26:00.298802    5397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:00.302201    5397 config.go:182] Loaded profile config "old-k8s-version-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 04:26:00.305868    5397 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 04:26:00.308804    5397 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:00.312867    5397 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:26:00.319867    5397 start.go:297] selected driver: qemu2
	I0819 04:26:00.319876    5397 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:00.319947    5397 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:00.322305    5397 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:26:00.322333    5397 cni.go:84] Creating CNI manager for ""
	I0819 04:26:00.322340    5397 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 04:26:00.322375    5397 start.go:340] cluster config:
	{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:00.325834    5397 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:00.332788    5397 out.go:177] * Starting "old-k8s-version-971000" primary control-plane node in "old-k8s-version-971000" cluster
	I0819 04:26:00.336888    5397 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:26:00.336922    5397 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:26:00.336935    5397 cache.go:56] Caching tarball of preloaded images
	I0819 04:26:00.337012    5397 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:26:00.337018    5397 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 04:26:00.337110    5397 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/old-k8s-version-971000/config.json ...
	I0819 04:26:00.337636    5397 start.go:360] acquireMachinesLock for old-k8s-version-971000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:00.337665    5397 start.go:364] duration metric: took 23.125µs to acquireMachinesLock for "old-k8s-version-971000"
	I0819 04:26:00.337679    5397 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:00.337686    5397 fix.go:54] fixHost starting: 
	I0819 04:26:00.337801    5397 fix.go:112] recreateIfNeeded on old-k8s-version-971000: state=Stopped err=<nil>
	W0819 04:26:00.337808    5397 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:00.341739    5397 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-971000" ...
	I0819 04:26:00.348744    5397 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:00.348788    5397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:0c:4b:b6:d0:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0819 04:26:00.350735    5397 main.go:141] libmachine: STDOUT: 
	I0819 04:26:00.350753    5397 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:00.350779    5397 fix.go:56] duration metric: took 13.095291ms for fixHost
	I0819 04:26:00.350791    5397 start.go:83] releasing machines lock for "old-k8s-version-971000", held for 13.113333ms
	W0819 04:26:00.350797    5397 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:00.350828    5397 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:00.350831    5397 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:05.353328    5397 start.go:360] acquireMachinesLock for old-k8s-version-971000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:05.353927    5397 start.go:364] duration metric: took 488.625µs to acquireMachinesLock for "old-k8s-version-971000"
	I0819 04:26:05.354096    5397 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:05.354117    5397 fix.go:54] fixHost starting: 
	I0819 04:26:05.354831    5397 fix.go:112] recreateIfNeeded on old-k8s-version-971000: state=Stopped err=<nil>
	W0819 04:26:05.354858    5397 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:05.362717    5397 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-971000" ...
	I0819 04:26:05.366686    5397 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:05.366897    5397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:0c:4b:b6:d0:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0819 04:26:05.375082    5397 main.go:141] libmachine: STDOUT: 
	I0819 04:26:05.375160    5397 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:05.375244    5397 fix.go:56] duration metric: took 21.128959ms for fixHost
	I0819 04:26:05.375266    5397 start.go:83] releasing machines lock for "old-k8s-version-971000", held for 21.309708ms
	W0819 04:26:05.375456    5397 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:05.383681    5397 out.go:201] 
	W0819 04:26:05.389801    5397 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:05.389822    5397 out.go:270] * 
	* 
	W0819 04:26:05.391070    5397 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:05.404702    5397 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (47.82475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
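
Note: the driver log above shows qemu being launched through /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet, and both restart attempts die with "Connection refused", i.e. nothing is listening on that socket. A host-side sanity check (a sketch; it assumes socket_vmnet was installed as a launchd service per minikube's qemu driver setup, which this log does not confirm):

    ls -l /var/run/socket_vmnet                 # the Unix socket should exist if the daemon is up
    sudo launchctl list | grep -i socket_vmnet  # the daemon should appear as a loaded service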

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-971000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (30.834958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-971000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-971000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-971000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.697375ms)

** stderr ** 
	error: context "old-k8s-version-971000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-971000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (28.766084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-971000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (30.198042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
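
Note: the image diff above is a side effect of the failed start, not a registry or caching problem: with the host stopped, image list has nothing to report, so the entire expected v1.20.0 manifest lands on the "-want" side. Re-running the same invocation shown at start_stop_delete_test.go:304 reproduces it:

    out/minikube-darwin-arm64 -p old-k8s-version-971000 image list --format=json
    # with the profile stopped this prints no image entries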

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-971000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-971000 --alsologtostderr -v=1: exit status 83 (41.083792ms)

-- stdout --
	* The control-plane node old-k8s-version-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-971000"

-- /stdout --
** stderr ** 
	I0819 04:26:05.650197    5416 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:05.650521    5416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:05.650525    5416 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:05.650527    5416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:05.650671    5416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:05.650867    5416 out.go:352] Setting JSON to false
	I0819 04:26:05.650878    5416 mustload.go:65] Loading cluster: old-k8s-version-971000
	I0819 04:26:05.651082    5416 config.go:182] Loaded profile config "old-k8s-version-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 04:26:05.655112    5416 out.go:177] * The control-plane node old-k8s-version-971000 host is not running: state=Stopped
	I0819 04:26:05.659071    5416 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-971000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-971000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (29.744958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (30.280666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-752000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-752000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.920580666s)

-- stdout --
	* [no-preload-752000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-752000" primary control-plane node in "no-preload-752000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-752000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:05.961039    5433 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:05.961168    5433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:05.961174    5433 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:05.961177    5433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:05.961320    5433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:05.962538    5433 out.go:352] Setting JSON to false
	I0819 04:26:05.978744    5433 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3328,"bootTime":1724063437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:26:05.978877    5433 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:05.983642    5433 out.go:177] * [no-preload-752000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:05.990415    5433 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:26:05.990497    5433 notify.go:220] Checking for updates...
	I0819 04:26:05.997538    5433 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:26:05.998696    5433 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:06.001566    5433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:06.004565    5433 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:26:06.007600    5433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:06.010850    5433 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:06.010907    5433 config.go:182] Loaded profile config "stopped-upgrade-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:26:06.010959    5433 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:06.015539    5433 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:26:06.022600    5433 start.go:297] selected driver: qemu2
	I0819 04:26:06.022608    5433 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:26:06.022615    5433 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:06.024668    5433 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:26:06.027719    5433 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:26:06.030693    5433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:26:06.030710    5433 cni.go:84] Creating CNI manager for ""
	I0819 04:26:06.030716    5433 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:26:06.030719    5433 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:26:06.030748    5433 start.go:340] cluster config:
	{Name:no-preload-752000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:06.034012    5433 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:06.041603    5433 out.go:177] * Starting "no-preload-752000" primary control-plane node in "no-preload-752000" cluster
	I0819 04:26:06.044485    5433 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:06.044551    5433 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/no-preload-752000/config.json ...
	I0819 04:26:06.044566    5433 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/no-preload-752000/config.json: {Name:mk3cade352e5c7319460db74d655b6efa1ccaa31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:26:06.044567    5433 cache.go:107] acquiring lock: {Name:mk3f3e925478163a3af4d89500c009678704e9a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:06.044573    5433 cache.go:107] acquiring lock: {Name:mk6bbecbd4317adaad536be0a5fd80b93128aad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:06.044624    5433 cache.go:115] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 04:26:06.044630    5433 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 67.25µs
	I0819 04:26:06.044635    5433 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 04:26:06.044644    5433 cache.go:107] acquiring lock: {Name:mk7db3c7e20acf624ad3ae2a5f14770d5be8a25c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:06.044729    5433 cache.go:107] acquiring lock: {Name:mkd2f671faf65141569ccbcf5178c3c872a9aed5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:06.044744    5433 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 04:26:06.044721    5433 cache.go:107] acquiring lock: {Name:mkbd83dad8c41677b0447c838fbd7689b2eef1be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:06.044758    5433 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 04:26:06.044766    5433 cache.go:107] acquiring lock: {Name:mk20c07f2f461dacf0f1baa189a11ab5633354e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:06.044756    5433 cache.go:107] acquiring lock: {Name:mk8d81bb8c271ea2abea9b06b94f1e3ae8ef3dab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:06.044801    5433 cache.go:107] acquiring lock: {Name:mkea9d38ee0d5cec6a06d1d601ae0c00c9e82155 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:06.044903    5433 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 04:26:06.044970    5433 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 04:26:06.044996    5433 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 04:26:06.045016    5433 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 04:26:06.045045    5433 start.go:360] acquireMachinesLock for no-preload-752000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:06.045081    5433 start.go:364] duration metric: took 30.917µs to acquireMachinesLock for "no-preload-752000"
	I0819 04:26:06.045096    5433 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 04:26:06.045094    5433 start.go:93] Provisioning new machine with config: &{Name:no-preload-752000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:no-preload-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:26:06.045117    5433 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:26:06.052552    5433 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:26:06.055853    5433 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 04:26:06.055884    5433 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 04:26:06.055857    5433 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 04:26:06.055935    5433 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 04:26:06.055991    5433 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 04:26:06.056154    5433 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 04:26:06.056271    5433 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 04:26:06.068372    5433 start.go:159] libmachine.API.Create for "no-preload-752000" (driver="qemu2")
	I0819 04:26:06.068409    5433 client.go:168] LocalClient.Create starting
	I0819 04:26:06.068481    5433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:26:06.068526    5433 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:06.068543    5433 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:06.068580    5433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:26:06.068603    5433 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:06.068611    5433 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:06.068952    5433 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:26:06.227011    5433 main.go:141] libmachine: Creating SSH key...
	I0819 04:26:06.329365    5433 main.go:141] libmachine: Creating Disk image...
	I0819 04:26:06.329388    5433 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:26:06.329576    5433 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2
	I0819 04:26:06.340843    5433 main.go:141] libmachine: STDOUT: 
	I0819 04:26:06.340872    5433 main.go:141] libmachine: STDERR: 
	I0819 04:26:06.340920    5433 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2 +20000M
	I0819 04:26:06.350429    5433 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:26:06.350448    5433 main.go:141] libmachine: STDERR: 
	I0819 04:26:06.350463    5433 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2
	I0819 04:26:06.350468    5433 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:26:06.350481    5433 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:06.350507    5433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:33:2a:6f:36:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2
	I0819 04:26:06.352854    5433 main.go:141] libmachine: STDOUT: 
	I0819 04:26:06.352900    5433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:06.352934    5433 client.go:171] duration metric: took 284.523667ms to LocalClient.Create
	I0819 04:26:06.431675    5433 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0819 04:26:06.437628    5433 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 04:26:06.458997    5433 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 04:26:06.480985    5433 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0819 04:26:06.491021    5433 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 04:26:06.547274    5433 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 04:26:06.576057    5433 cache.go:162] opening:  /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 04:26:06.684323    5433 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 04:26:06.684334    5433 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 639.696792ms
	I0819 04:26:06.684340    5433 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 04:26:08.353162    5433 start.go:128] duration metric: took 2.308039125s to createHost
	I0819 04:26:08.353237    5433 start.go:83] releasing machines lock for "no-preload-752000", held for 2.308178s
	W0819 04:26:08.353293    5433 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:08.362846    5433 out.go:177] * Deleting "no-preload-752000" in qemu2 ...
	W0819 04:26:08.391499    5433 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:08.391522    5433 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:09.479441    5433 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 04:26:09.479470    5433 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.434744s
	I0819 04:26:09.479484    5433 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 04:26:10.662690    5433 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 04:26:10.662740    5433 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.618231208s
	I0819 04:26:10.662761    5433 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 04:26:10.682404    5433 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 04:26:10.682419    5433 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.637744542s
	I0819 04:26:10.682429    5433 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 04:26:10.828832    5433 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 04:26:10.828855    5433 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.784134541s
	I0819 04:26:10.828867    5433 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 04:26:11.103737    5433 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 04:26:11.103760    5433 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 5.059115917s
	I0819 04:26:11.103772    5433 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 04:26:13.391708    5433 start.go:360] acquireMachinesLock for no-preload-752000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:13.392258    5433 start.go:364] duration metric: took 465.875µs to acquireMachinesLock for "no-preload-752000"
	I0819 04:26:13.392333    5433 start.go:93] Provisioning new machine with config: &{Name:no-preload-752000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:26:13.392628    5433 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:26:13.403246    5433 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:26:13.452526    5433 start.go:159] libmachine.API.Create for "no-preload-752000" (driver="qemu2")
	I0819 04:26:13.452725    5433 client.go:168] LocalClient.Create starting
	I0819 04:26:13.452856    5433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:26:13.452928    5433 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:13.452948    5433 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:13.453025    5433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:26:13.453071    5433 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:13.453088    5433 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:13.453637    5433 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:26:13.747964    5433 main.go:141] libmachine: Creating SSH key...
	I0819 04:26:13.789451    5433 main.go:141] libmachine: Creating Disk image...
	I0819 04:26:13.789456    5433 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:26:13.789651    5433 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2
	I0819 04:26:13.798963    5433 main.go:141] libmachine: STDOUT: 
	I0819 04:26:13.798983    5433 main.go:141] libmachine: STDERR: 
	I0819 04:26:13.799032    5433 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2 +20000M
	I0819 04:26:13.807216    5433 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:26:13.807241    5433 main.go:141] libmachine: STDERR: 
	I0819 04:26:13.807254    5433 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2
	I0819 04:26:13.807267    5433 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:26:13.807286    5433 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:13.807321    5433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d3:5b:96:4f:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2
	I0819 04:26:13.809071    5433 main.go:141] libmachine: STDOUT: 
	I0819 04:26:13.809086    5433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:13.809099    5433 client.go:171] duration metric: took 356.372917ms to LocalClient.Create
	I0819 04:26:14.271824    5433 cache.go:157] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 04:26:14.271880    5433 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.22727025s
	I0819 04:26:14.271905    5433 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 04:26:14.272001    5433 cache.go:87] Successfully saved all images to host disk.
	I0819 04:26:15.811298    5433 start.go:128] duration metric: took 2.418645708s to createHost
	I0819 04:26:15.811352    5433 start.go:83] releasing machines lock for "no-preload-752000", held for 2.419098458s
	W0819 04:26:15.811545    5433 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-752000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-752000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:15.828924    5433 out.go:201] 
	W0819 04:26:15.832925    5433 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:15.832940    5433 out.go:270] * 
	* 
	W0819 04:26:15.834018    5433 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:15.841888    5433 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-752000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (42.850084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.97s)
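Note: every qemu2 start failure in this run has the same root cause: minikube launches QEMU through the socket_vmnet client, and that client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch for the CI host, assuming the install paths shown in the log above (the launchd service label is a guess and may differ per install):

	# Check that the daemon socket exists.
	ls -l /var/run/socket_vmnet
	# Run the same client binary minikube uses, with a trivial command in place of qemu;
	# if the daemon is down, this reproduces the identical "Connection refused" error.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
	# Restart the daemon (label assumed; verify with: sudo launchctl list | grep vmnet).
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet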

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-752000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-752000 create -f testdata/busybox.yaml: exit status 1 (28.842792ms)

** stderr ** 
	error: context "no-preload-752000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-752000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (30.934625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (29.772375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
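Note: this DeployApp failure is a downstream symptom of FirstStart: no VM was ever provisioned, so minikube never wrote a kubeconfig context for the profile, and kubectl fails before contacting any cluster. One way to confirm, assuming the kubeconfig path shown in the log:

	# After a successful start, a "no-preload-752000" context would be listed here;
	# in this run it is absent.
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19476-967/kubeconfig config get-contexts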

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-752000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-752000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-752000 describe deploy/metrics-server -n kube-system: exit status 1 (27.73975ms)

** stderr ** 
	error: context "no-preload-752000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-752000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (29.555917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
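Note: the expected image string follows from the two addon flags: --images=MetricsServer=registry.k8s.io/echoserver:1.4 sets the image and --registries=MetricsServer=fake.domain prefixes its registry, yielding fake.domain/registry.k8s.io/echoserver:1.4. With a live cluster the rewritten image could be checked directly; a sketch:

	# Print the image set on the metrics-server deployment (requires a running cluster).
	kubectl --context no-preload-752000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'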

TestStartStop/group/no-preload/serial/SecondStart (5.4s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-752000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-752000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.333216208s)

-- stdout --
	* [no-preload-752000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-752000" primary control-plane node in "no-preload-752000" cluster
	* Restarting existing qemu2 VM for "no-preload-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:19.988497    5515 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:19.988653    5515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:19.988656    5515 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:19.988659    5515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:19.988796    5515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:19.989794    5515 out.go:352] Setting JSON to false
	I0819 04:26:20.006308    5515 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3342,"bootTime":1724063437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:26:20.006390    5515 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:20.008829    5515 out.go:177] * [no-preload-752000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:20.017436    5515 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:26:20.017463    5515 notify.go:220] Checking for updates...
	I0819 04:26:20.025430    5515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:26:20.028514    5515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:20.031439    5515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:20.034423    5515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:26:20.037548    5515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:20.040703    5515 config.go:182] Loaded profile config "no-preload-752000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:20.040961    5515 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:20.045433    5515 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:26:20.052469    5515 start.go:297] selected driver: qemu2
	I0819 04:26:20.052479    5515 start.go:901] validating driver "qemu2" against &{Name:no-preload-752000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:20.052542    5515 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:20.054780    5515 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:26:20.054825    5515 cni.go:84] Creating CNI manager for ""
	I0819 04:26:20.054833    5515 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:26:20.054856    5515 start.go:340] cluster config:
	{Name:no-preload-752000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:20.058145    5515 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:20.065322    5515 out.go:177] * Starting "no-preload-752000" primary control-plane node in "no-preload-752000" cluster
	I0819 04:26:20.069452    5515 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:20.069536    5515 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/no-preload-752000/config.json ...
	I0819 04:26:20.069580    5515 cache.go:107] acquiring lock: {Name:mkbd83dad8c41677b0447c838fbd7689b2eef1be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:20.069580    5515 cache.go:107] acquiring lock: {Name:mk3f3e925478163a3af4d89500c009678704e9a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:20.069603    5515 cache.go:107] acquiring lock: {Name:mkd2f671faf65141569ccbcf5178c3c872a9aed5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:20.069653    5515 cache.go:115] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 04:26:20.069659    5515 cache.go:115] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 04:26:20.069666    5515 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 97.542µs
	I0819 04:26:20.069667    5515 cache.go:107] acquiring lock: {Name:mkea9d38ee0d5cec6a06d1d601ae0c00c9e82155 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:20.069681    5515 cache.go:107] acquiring lock: {Name:mk20c07f2f461dacf0f1baa189a11ab5633354e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:20.069660    5515 cache.go:115] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 04:26:20.069705    5515 cache.go:115] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 04:26:20.069709    5515 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 140.333µs
	I0819 04:26:20.069712    5515 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 04:26:20.069686    5515 cache.go:107] acquiring lock: {Name:mk7db3c7e20acf624ad3ae2a5f14770d5be8a25c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:20.069719    5515 cache.go:115] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 04:26:20.069722    5515 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 42.083µs
	I0819 04:26:20.069728    5515 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 04:26:20.069711    5515 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 43.708µs
	I0819 04:26:20.069721    5515 cache.go:107] acquiring lock: {Name:mk8d81bb8c271ea2abea9b06b94f1e3ae8ef3dab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:20.069740    5515 cache.go:115] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 04:26:20.069737    5515 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 04:26:20.069666    5515 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 63.333µs
	I0819 04:26:20.069750    5515 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 04:26:20.069675    5515 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 04:26:20.069587    5515 cache.go:107] acquiring lock: {Name:mk6bbecbd4317adaad536be0a5fd80b93128aad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:20.069747    5515 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 64.792µs
	I0819 04:26:20.069778    5515 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 04:26:20.069785    5515 cache.go:115] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 04:26:20.069791    5515 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 77µs
	I0819 04:26:20.069798    5515 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 04:26:20.069795    5515 cache.go:115] /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 04:26:20.069802    5515 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 222.5µs
	I0819 04:26:20.069805    5515 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 04:26:20.069808    5515 cache.go:87] Successfully saved all images to host disk.
	I0819 04:26:20.069968    5515 start.go:360] acquireMachinesLock for no-preload-752000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:20.069993    5515 start.go:364] duration metric: took 19.917µs to acquireMachinesLock for "no-preload-752000"
	I0819 04:26:20.070001    5515 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:20.070006    5515 fix.go:54] fixHost starting: 
	I0819 04:26:20.070112    5515 fix.go:112] recreateIfNeeded on no-preload-752000: state=Stopped err=<nil>
	W0819 04:26:20.070120    5515 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:20.077441    5515 out.go:177] * Restarting existing qemu2 VM for "no-preload-752000" ...
	I0819 04:26:20.081447    5515 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:20.081489    5515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d3:5b:96:4f:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2
	I0819 04:26:20.083287    5515 main.go:141] libmachine: STDOUT: 
	I0819 04:26:20.083306    5515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:20.083332    5515 fix.go:56] duration metric: took 13.324875ms for fixHost
	I0819 04:26:20.083335    5515 start.go:83] releasing machines lock for "no-preload-752000", held for 13.33925ms
	W0819 04:26:20.083342    5515 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:20.083364    5515 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:20.083368    5515 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:25.085602    5515 start.go:360] acquireMachinesLock for no-preload-752000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:25.207794    5515 start.go:364] duration metric: took 122.097959ms to acquireMachinesLock for "no-preload-752000"
	I0819 04:26:25.207906    5515 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:25.207927    5515 fix.go:54] fixHost starting: 
	I0819 04:26:25.208605    5515 fix.go:112] recreateIfNeeded on no-preload-752000: state=Stopped err=<nil>
	W0819 04:26:25.208633    5515 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:25.221961    5515 out.go:177] * Restarting existing qemu2 VM for "no-preload-752000" ...
	I0819 04:26:25.233958    5515 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:25.234128    5515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d3:5b:96:4f:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/no-preload-752000/disk.qcow2
	I0819 04:26:25.245435    5515 main.go:141] libmachine: STDOUT: 
	I0819 04:26:25.245763    5515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:25.245854    5515 fix.go:56] duration metric: took 37.928917ms for fixHost
	I0819 04:26:25.245873    5515 start.go:83] releasing machines lock for "no-preload-752000", held for 38.054ms
	W0819 04:26:25.246113    5515 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-752000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-752000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:25.254879    5515 out.go:201] 
	W0819 04:26:25.260093    5515 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:25.260128    5515 out.go:270] * 
	* 
	W0819 04:26:25.262423    5515 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:25.275032    5515 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-752000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (62.400666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.40s)
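Note: SecondStart takes the restart path (fixHost) rather than create, but both funnel into the same socket_vmnet_client wrapper: the client connects to the daemon socket, hands the connection to the child process as file descriptor 3, and execs QEMU, which is why the flags read -netdev socket,id=net0,fd=3. A stripped-down reproduction of that invocation shape, using the binary paths from the log (not expected to boot anything; it only exercises the socket handshake):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -display none \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3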

TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-102000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-102000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.950953125s)

-- stdout --
	* [embed-certs-102000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-102000" primary control-plane node in "embed-certs-102000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-102000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:22.761156    5525 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:22.761297    5525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:22.761300    5525 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:22.761302    5525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:22.761434    5525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:22.762529    5525 out.go:352] Setting JSON to false
	I0819 04:26:22.778548    5525 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3345,"bootTime":1724063437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:26:22.778619    5525 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:22.783077    5525 out.go:177] * [embed-certs-102000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:22.790085    5525 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:26:22.790127    5525 notify.go:220] Checking for updates...
	I0819 04:26:22.794448    5525 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:26:22.797089    5525 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:22.800089    5525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:22.803077    5525 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:26:22.806068    5525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:22.809468    5525 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:22.809544    5525 config.go:182] Loaded profile config "no-preload-752000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:22.809590    5525 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:22.814139    5525 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:26:22.821031    5525 start.go:297] selected driver: qemu2
	I0819 04:26:22.821037    5525 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:26:22.821043    5525 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:22.823355    5525 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:26:22.828076    5525 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:26:22.831151    5525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:26:22.831196    5525 cni.go:84] Creating CNI manager for ""
	I0819 04:26:22.831205    5525 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:26:22.831211    5525 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:26:22.831242    5525 start.go:340] cluster config:
	{Name:embed-certs-102000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:22.835001    5525 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:22.847002    5525 out.go:177] * Starting "embed-certs-102000" primary control-plane node in "embed-certs-102000" cluster
	I0819 04:26:22.851024    5525 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:22.851042    5525 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:26:22.851053    5525 cache.go:56] Caching tarball of preloaded images
	I0819 04:26:22.851124    5525 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:26:22.851130    5525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:26:22.851212    5525 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/embed-certs-102000/config.json ...
	I0819 04:26:22.851227    5525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/embed-certs-102000/config.json: {Name:mk7f072c1f732fd4aae1ab2d5a605ea0f332db25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:26:22.851693    5525 start.go:360] acquireMachinesLock for embed-certs-102000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:22.851729    5525 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "embed-certs-102000"
	I0819 04:26:22.851742    5525 start.go:93] Provisioning new machine with config: &{Name:embed-certs-102000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:26:22.851773    5525 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:26:22.860111    5525 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:26:22.878308    5525 start.go:159] libmachine.API.Create for "embed-certs-102000" (driver="qemu2")
	I0819 04:26:22.878344    5525 client.go:168] LocalClient.Create starting
	I0819 04:26:22.878420    5525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:26:22.878451    5525 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:22.878460    5525 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:22.878501    5525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:26:22.878532    5525 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:22.878545    5525 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:22.879070    5525 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:26:23.099163    5525 main.go:141] libmachine: Creating SSH key...
	I0819 04:26:23.186468    5525 main.go:141] libmachine: Creating Disk image...
	I0819 04:26:23.186473    5525 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:26:23.186657    5525 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2
	I0819 04:26:23.195816    5525 main.go:141] libmachine: STDOUT: 
	I0819 04:26:23.195833    5525 main.go:141] libmachine: STDERR: 
	I0819 04:26:23.195871    5525 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2 +20000M
	I0819 04:26:23.203766    5525 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:26:23.203781    5525 main.go:141] libmachine: STDERR: 
	I0819 04:26:23.203796    5525 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2
	I0819 04:26:23.203800    5525 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:26:23.203812    5525 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:23.203837    5525 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f8:bd:57:36:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2
	I0819 04:26:23.205414    5525 main.go:141] libmachine: STDOUT: 
	I0819 04:26:23.205428    5525 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:23.205444    5525 client.go:171] duration metric: took 327.099416ms to LocalClient.Create
	I0819 04:26:25.207587    5525 start.go:128] duration metric: took 2.355825291s to createHost
	I0819 04:26:25.207675    5525 start.go:83] releasing machines lock for "embed-certs-102000", held for 2.35596175s
	W0819 04:26:25.207722    5525 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:25.229954    5525 out.go:177] * Deleting "embed-certs-102000" in qemu2 ...
	W0819 04:26:25.288985    5525 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:25.289030    5525 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:30.291214    5525 start.go:360] acquireMachinesLock for embed-certs-102000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:30.291698    5525 start.go:364] duration metric: took 385.292µs to acquireMachinesLock for "embed-certs-102000"
	I0819 04:26:30.291905    5525 start.go:93] Provisioning new machine with config: &{Name:embed-certs-102000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:26:30.292177    5525 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:26:30.301761    5525 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:26:30.354006    5525 start.go:159] libmachine.API.Create for "embed-certs-102000" (driver="qemu2")
	I0819 04:26:30.354057    5525 client.go:168] LocalClient.Create starting
	I0819 04:26:30.354176    5525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:26:30.354235    5525 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:30.354251    5525 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:30.354325    5525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:26:30.354372    5525 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:30.354394    5525 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:30.354931    5525 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:26:30.522034    5525 main.go:141] libmachine: Creating SSH key...
	I0819 04:26:30.615468    5525 main.go:141] libmachine: Creating Disk image...
	I0819 04:26:30.615474    5525 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:26:30.615663    5525 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2
	I0819 04:26:30.624792    5525 main.go:141] libmachine: STDOUT: 
	I0819 04:26:30.624810    5525 main.go:141] libmachine: STDERR: 
	I0819 04:26:30.624868    5525 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2 +20000M
	I0819 04:26:30.632695    5525 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:26:30.632709    5525 main.go:141] libmachine: STDERR: 
	I0819 04:26:30.632721    5525 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2
	I0819 04:26:30.632725    5525 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:26:30.632735    5525 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:30.632761    5525 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:5e:8a:41:9b:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2
	I0819 04:26:30.634388    5525 main.go:141] libmachine: STDOUT: 
	I0819 04:26:30.634428    5525 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:30.634439    5525 client.go:171] duration metric: took 280.3795ms to LocalClient.Create
	I0819 04:26:32.636608    5525 start.go:128] duration metric: took 2.344423708s to createHost
	I0819 04:26:32.636677    5525 start.go:83] releasing machines lock for "embed-certs-102000", held for 2.344984459s
	W0819 04:26:32.636964    5525 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-102000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:32.646469    5525 out.go:201] 
	W0819 04:26:32.655913    5525 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:32.655950    5525 out.go:270] * 
	W0819 04:26:32.659025    5525 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:32.668444    5525 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube (first start). args "out/minikube-darwin-arm64 start -p embed-certs-102000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (64.992167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
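
Every failure in this group traces back to the same condition visible above: nothing was listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client exited with "Connection refused" before qemu-system-aarch64 could start. A minimal sketch of probing that socket the way the client's initial connect would, assuming only the socket path reported in the log (this program is not part of the minikube codebase):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the condition the run kept hitting: no daemon on the socket.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Run on the affected host, this would fail the same way the tests do until the socket_vmnet daemon is brought back up.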

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-752000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (32.1225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
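
Because FirstStart never created the cluster, the profile's kubeconfig context is missing, and every later step in the serial group fails with the same `context "no-preload-752000" does not exist` error rather than a real assertion failure. A minimal sketch of checking for the context up front, assuming only that kubectl is on PATH (this helper is hypothetical, not the suite's own):

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
	)

	// contextExists reports whether the named context is defined in the
	// active kubeconfig, using `kubectl config get-contexts -o name`.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if sc.Text() == name {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := contextExists("no-preload-752000")
		fmt.Println(ok, err) // prints "false <nil>" in the state captured above
	}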

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-752000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-752000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-752000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.833166ms)

** stderr ** 
	error: context "no-preload-752000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-752000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (28.86ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-752000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (29.316958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
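
The `(-want +got)` diff above follows the conventions of github.com/google/go-cmp: every expected v1.31.0 image is prefixed with `-` because `image list` returned nothing for a VM that never booted. A minimal sketch of how such a comparison is produced; the want list is abbreviated from the log, and the empty got slice is an assumption matching the captured state:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.0",
			// ...remaining expected v1.31.0 images elided...
		}
		var got []string // empty: the profile's VM never started
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
		}
	}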

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-752000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-752000 --alsologtostderr -v=1: exit status 83 (43.136667ms)

-- stdout --
	* The control-plane node no-preload-752000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-752000"

-- /stdout --
** stderr ** 
	I0819 04:26:25.541923    5547 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:25.542064    5547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:25.542067    5547 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:25.542070    5547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:25.542187    5547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:25.542399    5547 out.go:352] Setting JSON to false
	I0819 04:26:25.542408    5547 mustload.go:65] Loading cluster: no-preload-752000
	I0819 04:26:25.542578    5547 config.go:182] Loaded profile config "no-preload-752000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:25.546543    5547 out.go:177] * The control-plane node no-preload-752000 host is not running: state=Stopped
	I0819 04:26:25.553581    5547 out.go:177]   To start a cluster, run: "minikube start -p no-preload-752000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-752000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (29.876667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (29.66475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
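
Three exit codes recur in this group: 80 (GUEST_PROVISION) from the failed start, 83 from pausing a stopped host, and 7 from status against a stopped profile. A minimal sketch of recovering such codes from a subprocess, as a harness might; this is illustrative, not minikube's helpers_test.go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// exitCode runs a command and returns its exit status, or -1 if the
	// command could not be started at all.
	func exitCode(name string, args ...string) int {
		err := exec.Command(name, args...).Run()
		if err == nil {
			return 0
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1
	}

	func main() {
		// Against the stopped profile above this would print 7.
		fmt.Println(exitCode("minikube", "status", "-p", "no-preload-752000"))
	}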

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-030000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-030000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.246843375s)

-- stdout --
	* [default-k8s-diff-port-030000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-030000" primary control-plane node in "default-k8s-diff-port-030000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-030000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:25.965949    5571 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:25.966200    5571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:25.966206    5571 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:25.966208    5571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:25.966374    5571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:25.967689    5571 out.go:352] Setting JSON to false
	I0819 04:26:25.983921    5571 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3348,"bootTime":1724063437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:26:25.983986    5571 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:25.989131    5571 out.go:177] * [default-k8s-diff-port-030000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:25.995027    5571 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:26:25.995042    5571 notify.go:220] Checking for updates...
	I0819 04:26:26.003096    5571 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:26:26.006056    5571 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:26.009134    5571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:26.012152    5571 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:26:26.015076    5571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:26.018479    5571 config.go:182] Loaded profile config "embed-certs-102000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:26.018544    5571 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:26.018605    5571 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:26.023159    5571 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:26:26.030107    5571 start.go:297] selected driver: qemu2
	I0819 04:26:26.030115    5571 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:26:26.030122    5571 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:26.032406    5571 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:26:26.036217    5571 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:26:26.037865    5571 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:26:26.037895    5571 cni.go:84] Creating CNI manager for ""
	I0819 04:26:26.037901    5571 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:26:26.037906    5571 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:26:26.037931    5571 start.go:340] cluster config:
	{Name:default-k8s-diff-port-030000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:26.041520    5571 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:26.049110    5571 out.go:177] * Starting "default-k8s-diff-port-030000" primary control-plane node in "default-k8s-diff-port-030000" cluster
	I0819 04:26:26.053100    5571 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:26.053118    5571 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:26:26.053129    5571 cache.go:56] Caching tarball of preloaded images
	I0819 04:26:26.053210    5571 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:26:26.053216    5571 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:26:26.053293    5571 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/default-k8s-diff-port-030000/config.json ...
	I0819 04:26:26.053304    5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/default-k8s-diff-port-030000/config.json: {Name:mka8557f38da513ac3f5e79ee5627768eceb398b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:26:26.053547    5571 start.go:360] acquireMachinesLock for default-k8s-diff-port-030000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:26.053586    5571 start.go:364] duration metric: took 29.834µs to acquireMachinesLock for "default-k8s-diff-port-030000"
	I0819 04:26:26.053599    5571 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-030000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:26:26.053632    5571 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:26:26.062008    5571 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:26:26.079430    5571 start.go:159] libmachine.API.Create for "default-k8s-diff-port-030000" (driver="qemu2")
	I0819 04:26:26.079449    5571 client.go:168] LocalClient.Create starting
	I0819 04:26:26.079514    5571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:26:26.079552    5571 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:26.079560    5571 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:26.079597    5571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:26:26.079621    5571 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:26.079632    5571 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:26.080083    5571 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:26:26.292093    5571 main.go:141] libmachine: Creating SSH key...
	I0819 04:26:26.560480    5571 main.go:141] libmachine: Creating Disk image...
	I0819 04:26:26.560487    5571 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:26:26.560731    5571 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2
	I0819 04:26:26.570524    5571 main.go:141] libmachine: STDOUT: 
	I0819 04:26:26.570541    5571 main.go:141] libmachine: STDERR: 
	I0819 04:26:26.570601    5571 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2 +20000M
	I0819 04:26:26.578651    5571 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:26:26.578670    5571 main.go:141] libmachine: STDERR: 
	I0819 04:26:26.578686    5571 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2
	I0819 04:26:26.578691    5571 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:26:26.578699    5571 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:26.578739    5571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:42:f1:3e:e3:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2
	I0819 04:26:26.580427    5571 main.go:141] libmachine: STDOUT: 
	I0819 04:26:26.580442    5571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:26.580460    5571 client.go:171] duration metric: took 501.013583ms to LocalClient.Create
	I0819 04:26:28.582626    5571 start.go:128] duration metric: took 2.529004042s to createHost
	I0819 04:26:28.582669    5571 start.go:83] releasing machines lock for "default-k8s-diff-port-030000", held for 2.529105333s
	W0819 04:26:28.582731    5571 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:28.593951    5571 out.go:177] * Deleting "default-k8s-diff-port-030000" in qemu2 ...
	W0819 04:26:28.629558    5571 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:28.629584    5571 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:33.631684    5571 start.go:360] acquireMachinesLock for default-k8s-diff-port-030000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:33.632117    5571 start.go:364] duration metric: took 359.25µs to acquireMachinesLock for "default-k8s-diff-port-030000"
	I0819 04:26:33.632290    5571 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-030000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:26:33.632569    5571 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:26:33.638245    5571 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:26:33.689640    5571 start.go:159] libmachine.API.Create for "default-k8s-diff-port-030000" (driver="qemu2")
	I0819 04:26:33.689820    5571 client.go:168] LocalClient.Create starting
	I0819 04:26:33.689951    5571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:26:33.690004    5571 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:33.690019    5571 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:33.690086    5571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:26:33.690120    5571 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:33.690136    5571 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:33.690684    5571 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:26:34.061937    5571 main.go:141] libmachine: Creating SSH key...
	I0819 04:26:34.121354    5571 main.go:141] libmachine: Creating Disk image...
	I0819 04:26:34.121359    5571 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:26:34.121539    5571 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2
	I0819 04:26:34.130638    5571 main.go:141] libmachine: STDOUT: 
	I0819 04:26:34.130663    5571 main.go:141] libmachine: STDERR: 
	I0819 04:26:34.130719    5571 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2 +20000M
	I0819 04:26:34.138736    5571 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:26:34.138750    5571 main.go:141] libmachine: STDERR: 
	I0819 04:26:34.138770    5571 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2
	I0819 04:26:34.138777    5571 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:26:34.138790    5571 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:34.138817    5571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:be:df:db:32:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2
	I0819 04:26:34.140436    5571 main.go:141] libmachine: STDOUT: 
	I0819 04:26:34.140449    5571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:34.140463    5571 client.go:171] duration metric: took 450.643125ms to LocalClient.Create
	I0819 04:26:36.142729    5571 start.go:128] duration metric: took 2.5101305s to createHost
	I0819 04:26:36.142797    5571 start.go:83] releasing machines lock for "default-k8s-diff-port-030000", held for 2.51068425s
	W0819 04:26:36.143140    5571 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-030000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:36.155554    5571 out.go:201] 
	W0819 04:26:36.159629    5571 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:36.159651    5571 out.go:270] * 
	W0819 04:26:36.162362    5571 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:36.171580    5571 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube (first start). args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-030000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (64.663208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.31s)
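
As with embed-certs, the driver gives host creation exactly one retry: after `! StartHost failed, but will try again`, it deletes the half-created machine, waits five seconds, and repeats the create before exiting with GUEST_PROVISION. A minimal sketch of that single-retry control flow; createHost here is a stand-in for the real libmachine call, and the error string is copied from the log:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Stand-in: in this run every attempt failed the same way.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}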

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-102000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-102000 create -f testdata/busybox.yaml: exit status 1 (29.461291ms)

** stderr ** 
	error: context "embed-certs-102000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-102000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (29.376917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (29.412125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-102000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-102000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-102000 describe deploy/metrics-server -n kube-system: exit status 1 (26.711667ms)

** stderr ** 
	error: context "embed-certs-102000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-102000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (28.495083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
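
The assertion behind this test is that, after `addons enable metrics-server --images=... --registries=...`, the metrics-server deployment carries the overridden image `fake.domain/registry.k8s.io/echoserver:1.4`. A minimal sketch of that check, assuming kubectl on PATH and using a standard jsonpath query (this is illustrative, not code from the suite):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "embed-certs-102000",
			"get", "deploy", "metrics-server", "-n", "kube-system",
			"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
		if err != nil {
			// In the captured state this fails: the context was never created.
			fmt.Println("lookup failed:", err)
			return
		}
		img := string(out)
		fmt.Println("image:", img,
			"override applied:", strings.Contains(img, "fake.domain/registry.k8s.io/echoserver:1.4"))
	}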

TestStartStop/group/embed-certs/serial/SecondStart (5.44s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-102000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-102000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.372321083s)

-- stdout --
	* [embed-certs-102000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-102000" primary control-plane node in "embed-certs-102000" cluster
	* Restarting existing qemu2 VM for "embed-certs-102000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-102000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:35.889826    5625 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:35.889944    5625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:35.889947    5625 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:35.889949    5625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:35.890078    5625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:35.891038    5625 out.go:352] Setting JSON to false
	I0819 04:26:35.907259    5625 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3358,"bootTime":1724063437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:26:35.907345    5625 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:35.912117    5625 out.go:177] * [embed-certs-102000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:35.918175    5625 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:26:35.918238    5625 notify.go:220] Checking for updates...
	I0819 04:26:35.925077    5625 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:26:35.928082    5625 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:35.931115    5625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:35.934041    5625 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:26:35.937094    5625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:35.940402    5625 config.go:182] Loaded profile config "embed-certs-102000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:35.940638    5625 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:35.945067    5625 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:26:35.952127    5625 start.go:297] selected driver: qemu2
	I0819 04:26:35.952134    5625 start.go:901] validating driver "qemu2" against &{Name:embed-certs-102000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:35.952200    5625 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:35.954391    5625 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:26:35.954418    5625 cni.go:84] Creating CNI manager for ""
	I0819 04:26:35.954425    5625 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:26:35.954456    5625 start.go:340] cluster config:
	{Name:embed-certs-102000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:35.957839    5625 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:35.965119    5625 out.go:177] * Starting "embed-certs-102000" primary control-plane node in "embed-certs-102000" cluster
	I0819 04:26:35.968953    5625 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:35.968968    5625 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:26:35.968977    5625 cache.go:56] Caching tarball of preloaded images
	I0819 04:26:35.969039    5625 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:26:35.969045    5625 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:26:35.969097    5625 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/embed-certs-102000/config.json ...
	I0819 04:26:35.969843    5625 start.go:360] acquireMachinesLock for embed-certs-102000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:36.142942    5625 start.go:364] duration metric: took 173.039167ms to acquireMachinesLock for "embed-certs-102000"
	I0819 04:26:36.143098    5625 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:36.143135    5625 fix.go:54] fixHost starting: 
	I0819 04:26:36.143795    5625 fix.go:112] recreateIfNeeded on embed-certs-102000: state=Stopped err=<nil>
	W0819 04:26:36.143849    5625 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:36.155527    5625 out.go:177] * Restarting existing qemu2 VM for "embed-certs-102000" ...
	I0819 04:26:36.159574    5625 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:36.159945    5625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:5e:8a:41:9b:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2
	I0819 04:26:36.170120    5625 main.go:141] libmachine: STDOUT: 
	I0819 04:26:36.170219    5625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:36.170371    5625 fix.go:56] duration metric: took 27.241208ms for fixHost
	I0819 04:26:36.170392    5625 start.go:83] releasing machines lock for "embed-certs-102000", held for 27.36625ms
	W0819 04:26:36.170431    5625 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:36.170651    5625 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:36.170684    5625 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:41.172827    5625 start.go:360] acquireMachinesLock for embed-certs-102000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:41.173188    5625 start.go:364] duration metric: took 278.834µs to acquireMachinesLock for "embed-certs-102000"
	I0819 04:26:41.173296    5625 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:41.173354    5625 fix.go:54] fixHost starting: 
	I0819 04:26:41.174042    5625 fix.go:112] recreateIfNeeded on embed-certs-102000: state=Stopped err=<nil>
	W0819 04:26:41.174070    5625 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:41.184735    5625 out.go:177] * Restarting existing qemu2 VM for "embed-certs-102000" ...
	I0819 04:26:41.187566    5625 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:41.187751    5625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:5e:8a:41:9b:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/embed-certs-102000/disk.qcow2
	I0819 04:26:41.197126    5625 main.go:141] libmachine: STDOUT: 
	I0819 04:26:41.197215    5625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:41.197299    5625 fix.go:56] duration metric: took 23.9815ms for fixHost
	I0819 04:26:41.197325    5625 start.go:83] releasing machines lock for "embed-certs-102000", held for 24.112667ms
	W0819 04:26:41.197561    5625 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-102000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-102000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:41.206682    5625 out.go:201] 
	W0819 04:26:41.210659    5625 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:41.210688    5625 out.go:270] * 
	* 
	W0819 04:26:41.213481    5625 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:41.221669    5625 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-102000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (65.26375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.44s)
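
Every failed start in this block dies at the same point: minikube launches QEMU through socket_vmnet_client, which cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal host-side triage sketch — the socket path and client binary are taken from the log above; that pgrep/ls are usable on the CI host is an assumption:

	# Is the socket_vmnet daemon running, and does its Unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Probe connectivity the same way minikube does, but with a no-op payload:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the daemon is down, every qemu2 start in this report fails identically, which matches the uniform "Connection refused" in the failures below.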

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-030000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-030000 create -f testdata/busybox.yaml: exit status 1 (29.44675ms)

** stderr ** 
	error: context "default-k8s-diff-port-030000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-030000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (28.836166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (28.506833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
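
The "context ... does not exist" error here is a downstream symptom: the cluster never started, so minikube never wrote a default-k8s-diff-port-030000 entry into the kubeconfig, and every kubectl call against that context fails immediately. A quick check with standard kubectl (that the context is absent is inferred from the failure above, not shown directly in the log):

	# List the contexts kubectl actually knows about; the profile's context
	# stays missing until a start succeeds.
	kubectl config get-contexts
	kubectl --context default-k8s-diff-port-030000 get nodes   # reproduces "does not exist"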

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-030000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-030000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-030000 describe deploy/metrics-server -n kube-system: exit status 1 (27.053541ms)

** stderr ** 
	error: context "default-k8s-diff-port-030000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-030000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (28.970584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-030000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-030000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.1929915s)

-- stdout --
	* [default-k8s-diff-port-030000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-030000" primary control-plane node in "default-k8s-diff-port-030000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-030000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-030000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:40.272985    5666 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:40.273132    5666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:40.273136    5666 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:40.273138    5666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:40.273270    5666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:40.274272    5666 out.go:352] Setting JSON to false
	I0819 04:26:40.290220    5666 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3363,"bootTime":1724063437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:26:40.290296    5666 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:40.294409    5666 out.go:177] * [default-k8s-diff-port-030000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:40.301351    5666 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:26:40.301424    5666 notify.go:220] Checking for updates...
	I0819 04:26:40.309318    5666 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:26:40.312327    5666 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:40.315285    5666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:40.318337    5666 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:26:40.321233    5666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:40.324561    5666 config.go:182] Loaded profile config "default-k8s-diff-port-030000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:40.324821    5666 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:40.329331    5666 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:26:40.336304    5666 start.go:297] selected driver: qemu2
	I0819 04:26:40.336312    5666 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-030000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:40.336389    5666 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:40.338719    5666 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:26:40.338762    5666 cni.go:84] Creating CNI manager for ""
	I0819 04:26:40.338770    5666 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:26:40.338791    5666 start.go:340] cluster config:
	{Name:default-k8s-diff-port-030000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:40.342386    5666 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:40.348314    5666 out.go:177] * Starting "default-k8s-diff-port-030000" primary control-plane node in "default-k8s-diff-port-030000" cluster
	I0819 04:26:40.352280    5666 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:40.352299    5666 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:26:40.352311    5666 cache.go:56] Caching tarball of preloaded images
	I0819 04:26:40.352360    5666 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:26:40.352366    5666 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:26:40.352426    5666 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/default-k8s-diff-port-030000/config.json ...
	I0819 04:26:40.352921    5666 start.go:360] acquireMachinesLock for default-k8s-diff-port-030000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:40.352949    5666 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "default-k8s-diff-port-030000"
	I0819 04:26:40.352959    5666 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:40.352967    5666 fix.go:54] fixHost starting: 
	I0819 04:26:40.353089    5666 fix.go:112] recreateIfNeeded on default-k8s-diff-port-030000: state=Stopped err=<nil>
	W0819 04:26:40.353098    5666 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:40.357243    5666 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-030000" ...
	I0819 04:26:40.365226    5666 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:40.365260    5666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:be:df:db:32:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2
	I0819 04:26:40.367455    5666 main.go:141] libmachine: STDOUT: 
	I0819 04:26:40.367476    5666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:40.367505    5666 fix.go:56] duration metric: took 14.539417ms for fixHost
	I0819 04:26:40.367511    5666 start.go:83] releasing machines lock for "default-k8s-diff-port-030000", held for 14.557375ms
	W0819 04:26:40.367517    5666 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:40.367552    5666 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:40.367556    5666 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:45.369814    5666 start.go:360] acquireMachinesLock for default-k8s-diff-port-030000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:45.370275    5666 start.go:364] duration metric: took 340.5µs to acquireMachinesLock for "default-k8s-diff-port-030000"
	I0819 04:26:45.370410    5666 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:45.370431    5666 fix.go:54] fixHost starting: 
	I0819 04:26:45.371250    5666 fix.go:112] recreateIfNeeded on default-k8s-diff-port-030000: state=Stopped err=<nil>
	W0819 04:26:45.371277    5666 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:45.376966    5666 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-030000" ...
	I0819 04:26:45.390903    5666 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:45.391227    5666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:be:df:db:32:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/default-k8s-diff-port-030000/disk.qcow2
	I0819 04:26:45.401097    5666 main.go:141] libmachine: STDOUT: 
	I0819 04:26:45.401176    5666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:45.401275    5666 fix.go:56] duration metric: took 30.845084ms for fixHost
	I0819 04:26:45.401303    5666 start.go:83] releasing machines lock for "default-k8s-diff-port-030000", held for 31.005875ms
	W0819 04:26:45.401522    5666 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-030000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-030000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:45.408859    5666 out.go:201] 
	W0819 04:26:45.413984    5666 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:45.414024    5666 out.go:270] * 
	* 
	W0819 04:26:45.416486    5666 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:45.424900    5666 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-030000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (70.593667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
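
The start path retries exactly once: fixHost fails, minikube logs "Will try again in 5 seconds", repeats the identical QEMU invocation, and then exits with GUEST_PROVISION (exit status 80). The error text suggests its own recovery; a sketch using this test's exact flags (deleting the profile only helps once socket_vmnet is reachable again):

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-030000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-030000 --memory=2200 \
	  --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2 \
	  --kubernetes-version=v1.31.0
	echo $?   # 80 for as long as the socket_vmnet daemon is unreachable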

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-102000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (31.948167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-102000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-102000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-102000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.791083ms)

** stderr ** 
	error: context "embed-certs-102000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-102000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (29.328208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-102000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (28.711542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
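
In the (-want +got) diff above — the convention used by cmp.Diff-style helpers — every "-" line is an image expected for v1.31.0 but missing from what `image list` returned; here that is the entire set, because the stopped VM reports no images at all. On a healthy profile the same command should list each of those references:

	out/minikube-darwin-arm64 -p embed-certs-102000 image list --format=json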

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-102000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-102000 --alsologtostderr -v=1: exit status 83 (41.852541ms)

-- stdout --
	* The control-plane node embed-certs-102000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-102000"

-- /stdout --
** stderr ** 
	I0819 04:26:41.486903    5685 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:41.487065    5685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:41.487069    5685 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:41.487071    5685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:41.487212    5685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:41.487431    5685 out.go:352] Setting JSON to false
	I0819 04:26:41.487441    5685 mustload.go:65] Loading cluster: embed-certs-102000
	I0819 04:26:41.487624    5685 config.go:182] Loaded profile config "embed-certs-102000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:41.492378    5685 out.go:177] * The control-plane node embed-certs-102000 host is not running: state=Stopped
	I0819 04:26:41.496410    5685 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-102000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-102000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (29.029958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (28.59975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-102000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
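
Unlike the starts, pause exits 83 rather than 80: the profile config loads (mustload), but the control-plane host is found Stopped, so the command refuses to act and prints its own remedy. Getting back to a pausable state follows the output's suggestion, and the first step again requires a working socket_vmnet:

	out/minikube-darwin-arm64 start -p embed-certs-102000
	out/minikube-darwin-arm64 pause -p embed-certs-102000 --alsologtostderr -v=1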

TestStartStop/group/newest-cni/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.96605925s)

-- stdout --
	* [newest-cni-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-260000" primary control-plane node in "newest-cni-260000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-260000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:41.805714    5702 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:41.805821    5702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:41.805824    5702 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:41.805826    5702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:41.805956    5702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:41.807202    5702 out.go:352] Setting JSON to false
	I0819 04:26:41.823221    5702 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3364,"bootTime":1724063437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:26:41.823291    5702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:41.828183    5702 out.go:177] * [newest-cni-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:41.835388    5702 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:26:41.835434    5702 notify.go:220] Checking for updates...
	I0819 04:26:41.841359    5702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:26:41.844461    5702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:41.845898    5702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:41.849282    5702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:26:41.852323    5702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:41.855783    5702 config.go:182] Loaded profile config "default-k8s-diff-port-030000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:41.855841    5702 config.go:182] Loaded profile config "multinode-837000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:41.855894    5702 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:41.860322    5702 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:26:41.867198    5702 start.go:297] selected driver: qemu2
	I0819 04:26:41.867206    5702 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:26:41.867212    5702 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:41.869429    5702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0819 04:26:41.869455    5702 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0819 04:26:41.877322    5702 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:26:41.878736    5702 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 04:26:41.878768    5702 cni.go:84] Creating CNI manager for ""
	I0819 04:26:41.878776    5702 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:26:41.878780    5702 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:26:41.878812    5702 start.go:340] cluster config:
	{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:41.882319    5702 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:41.889357    5702 out.go:177] * Starting "newest-cni-260000" primary control-plane node in "newest-cni-260000" cluster
	I0819 04:26:41.893296    5702 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:41.893309    5702 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:26:41.893318    5702 cache.go:56] Caching tarball of preloaded images
	I0819 04:26:41.893381    5702 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:26:41.893387    5702 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:26:41.893441    5702 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/newest-cni-260000/config.json ...
	I0819 04:26:41.893451    5702 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/newest-cni-260000/config.json: {Name:mk2aec9af4929ea654e9b1947b4e9f436cd7c62a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:26:41.893714    5702 start.go:360] acquireMachinesLock for newest-cni-260000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:41.893747    5702 start.go:364] duration metric: took 26.833µs to acquireMachinesLock for "newest-cni-260000"
	I0819 04:26:41.893761    5702 start.go:93] Provisioning new machine with config: &{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:26:41.893795    5702 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:26:41.902279    5702 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:26:41.919831    5702 start.go:159] libmachine.API.Create for "newest-cni-260000" (driver="qemu2")
	I0819 04:26:41.919871    5702 client.go:168] LocalClient.Create starting
	I0819 04:26:41.919949    5702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:26:41.919977    5702 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:41.919985    5702 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:41.920025    5702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:26:41.920050    5702 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:41.920056    5702 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:41.920503    5702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:26:42.111273    5702 main.go:141] libmachine: Creating SSH key...
	I0819 04:26:42.176166    5702 main.go:141] libmachine: Creating Disk image...
	I0819 04:26:42.176171    5702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:26:42.176362    5702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2
	I0819 04:26:42.185565    5702 main.go:141] libmachine: STDOUT: 
	I0819 04:26:42.185583    5702 main.go:141] libmachine: STDERR: 
	I0819 04:26:42.185623    5702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2 +20000M
	I0819 04:26:42.193399    5702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:26:42.193414    5702 main.go:141] libmachine: STDERR: 
	I0819 04:26:42.193429    5702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2
	I0819 04:26:42.193432    5702 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:26:42.193443    5702 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:42.193468    5702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:23:01:3c:3a:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2
	I0819 04:26:42.194995    5702 main.go:141] libmachine: STDOUT: 
	I0819 04:26:42.195010    5702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:42.195028    5702 client.go:171] duration metric: took 275.157ms to LocalClient.Create
	I0819 04:26:44.197181    5702 start.go:128] duration metric: took 2.303393458s to createHost
	I0819 04:26:44.197238    5702 start.go:83] releasing machines lock for "newest-cni-260000", held for 2.303509708s
	W0819 04:26:44.197291    5702 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:44.208305    5702 out.go:177] * Deleting "newest-cni-260000" in qemu2 ...
	W0819 04:26:44.241883    5702 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:44.241905    5702 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:49.244151    5702 start.go:360] acquireMachinesLock for newest-cni-260000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:49.244692    5702 start.go:364] duration metric: took 411.417µs to acquireMachinesLock for "newest-cni-260000"
	I0819 04:26:49.244867    5702 start.go:93] Provisioning new machine with config: &{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:26:49.245183    5702 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:26:49.254716    5702 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:26:49.305067    5702 start.go:159] libmachine.API.Create for "newest-cni-260000" (driver="qemu2")
	I0819 04:26:49.305113    5702 client.go:168] LocalClient.Create starting
	I0819 04:26:49.305234    5702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/ca.pem
	I0819 04:26:49.305308    5702 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:49.305328    5702 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:49.305394    5702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19476-967/.minikube/certs/cert.pem
	I0819 04:26:49.305440    5702 main.go:141] libmachine: Decoding PEM data...
	I0819 04:26:49.305463    5702 main.go:141] libmachine: Parsing certificate...
	I0819 04:26:49.306129    5702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19476-967/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:26:49.589476    5702 main.go:141] libmachine: Creating SSH key...
	I0819 04:26:49.674498    5702 main.go:141] libmachine: Creating Disk image...
	I0819 04:26:49.674505    5702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:26:49.674704    5702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2.raw /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2
	I0819 04:26:49.683960    5702 main.go:141] libmachine: STDOUT: 
	I0819 04:26:49.683980    5702 main.go:141] libmachine: STDERR: 
	I0819 04:26:49.684020    5702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2 +20000M
	I0819 04:26:49.691927    5702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:26:49.691944    5702 main.go:141] libmachine: STDERR: 
	I0819 04:26:49.691956    5702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2
	I0819 04:26:49.691961    5702 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:26:49.691972    5702 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:49.692003    5702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:20:81:da:c9:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2
	I0819 04:26:49.693609    5702 main.go:141] libmachine: STDOUT: 
	I0819 04:26:49.693632    5702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:49.693646    5702 client.go:171] duration metric: took 388.533667ms to LocalClient.Create
	I0819 04:26:51.695847    5702 start.go:128] duration metric: took 2.450657291s to createHost
	I0819 04:26:51.695897    5702 start.go:83] releasing machines lock for "newest-cni-260000", held for 2.451210583s
	W0819 04:26:51.696329    5702 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-260000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-260000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:51.712022    5702 out.go:201] 
	W0819 04:26:51.715126    5702 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:51.715152    5702 out.go:270] * 
	* 
	W0819 04:26:51.718197    5702 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:51.731874    5702 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (67.501416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.04s)
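Every qemu2 start in this run dies on the same stderr line, Failed to connect to "/var/run/socket_vmnet": Connection refused: the socket_vmnet daemon was not serving the UNIX socket that /opt/socket_vmnet/bin/socket_vmnet_client (visible in the executed command above) dials before handing fd 3 to qemu-system-aarch64. A pre-flight check along these lines could confirm that on the build agent; the socket and client paths are taken from the log, while the Homebrew service name is an assumption about how the daemon is installed:

    # Is anything serving the socket the tests expect? (paths from the log)
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If the daemon is down, restarting it may clear these failures
    # (assumes a Homebrew-managed socket_vmnet install; adjust to the actual setup):
    sudo brew services restart socket_vmnet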

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-030000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (31.488791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-030000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-030000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-030000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.65275ms)

** stderr ** 
	error: context "default-k8s-diff-port-030000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-030000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (29.459125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-030000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (29.4725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
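The -want +got diff above lists every expected image as missing because the profile's VM never booted, so image list had nothing to report rather than a partially wrong set. A manual spot-check of the same data could look like this (a sketch: the command mirrors the test invocation above, while the jq filter assumes the JSON is an array of objects carrying a repoTags field, which may vary by minikube version):

    out/minikube-darwin-arm64 -p default-k8s-diff-port-030000 image list --format=json \
      | jq -r '.[].repoTags[]?'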

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-030000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-030000 --alsologtostderr -v=1: exit status 83 (40.078459ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-030000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-030000"

-- /stdout --
** stderr ** 
	I0819 04:26:45.695574    5724 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:45.695728    5724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:45.695731    5724 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:45.695733    5724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:45.695847    5724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:45.696059    5724 out.go:352] Setting JSON to false
	I0819 04:26:45.696067    5724 mustload.go:65] Loading cluster: default-k8s-diff-port-030000
	I0819 04:26:45.696295    5724 config.go:182] Loaded profile config "default-k8s-diff-port-030000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:45.700135    5724 out.go:177] * The control-plane node default-k8s-diff-port-030000 host is not running: state=Stopped
	I0819 04:26:45.704002    5724 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-030000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-030000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (28.6015ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (28.227708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
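pause exits 83 here because the profile's host is already stopped, so minikube prints the start hint instead of pausing anything; the same pattern repeats for the newest-cni profile below. A wrapper that only pauses running profiles could gate on the same status probe the post-mortem uses, where {{.Host}} prints Running or Stopped (a sketch built from commands shown in this log):

    HOST=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p default-k8s-diff-port-030000)
    if [ "$HOST" = "Running" ]; then
      out/minikube-darwin-arm64 pause -p default-k8s-diff-port-030000 --alsologtostderr -v=1
    fi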

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.181343625s)

-- stdout --
	* [newest-cni-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-260000" primary control-plane node in "newest-cni-260000" cluster
	* Restarting existing qemu2 VM for "newest-cni-260000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-260000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:56.121308    5779 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:56.121446    5779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:56.121449    5779 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:56.121452    5779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:56.121578    5779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:26:56.122569    5779 out.go:352] Setting JSON to false
	I0819 04:26:56.138831    5779 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3379,"bootTime":1724063437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 04:26:56.138910    5779 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:56.143229    5779 out.go:177] * [newest-cni-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:56.150173    5779 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 04:26:56.150228    5779 notify.go:220] Checking for updates...
	I0819 04:26:56.157130    5779 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 04:26:56.160094    5779 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:56.163218    5779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:56.166082    5779 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 04:26:56.169126    5779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:56.172391    5779 config.go:182] Loaded profile config "newest-cni-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:56.172660    5779 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:56.177049    5779 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:26:56.184133    5779 start.go:297] selected driver: qemu2
	I0819 04:26:56.184142    5779 start.go:901] validating driver "qemu2" against &{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:56.184213    5779 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:56.186438    5779 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 04:26:56.186463    5779 cni.go:84] Creating CNI manager for ""
	I0819 04:26:56.186470    5779 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:26:56.186500    5779 start.go:340] cluster config:
	{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:56.189949    5779 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:56.198115    5779 out.go:177] * Starting "newest-cni-260000" primary control-plane node in "newest-cni-260000" cluster
	I0819 04:26:56.201045    5779 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:56.201060    5779 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:26:56.201073    5779 cache.go:56] Caching tarball of preloaded images
	I0819 04:26:56.201129    5779 preload.go:172] Found /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:26:56.201135    5779 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:26:56.201191    5779 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/newest-cni-260000/config.json ...
	I0819 04:26:56.201719    5779 start.go:360] acquireMachinesLock for newest-cni-260000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:56.201756    5779 start.go:364] duration metric: took 31µs to acquireMachinesLock for "newest-cni-260000"
	I0819 04:26:56.201766    5779 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:56.201773    5779 fix.go:54] fixHost starting: 
	I0819 04:26:56.201899    5779 fix.go:112] recreateIfNeeded on newest-cni-260000: state=Stopped err=<nil>
	W0819 04:26:56.201907    5779 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:56.205229    5779 out.go:177] * Restarting existing qemu2 VM for "newest-cni-260000" ...
	I0819 04:26:56.213171    5779 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:56.213216    5779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:20:81:da:c9:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2
	I0819 04:26:56.215312    5779 main.go:141] libmachine: STDOUT: 
	I0819 04:26:56.215332    5779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:56.215364    5779 fix.go:56] duration metric: took 13.590958ms for fixHost
	I0819 04:26:56.215369    5779 start.go:83] releasing machines lock for "newest-cni-260000", held for 13.608167ms
	W0819 04:26:56.215375    5779 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:56.215406    5779 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:56.215411    5779 start.go:729] Will try again in 5 seconds ...
	I0819 04:27:01.217474    5779 start.go:360] acquireMachinesLock for newest-cni-260000: {Name:mk4b83e6ce0ed6377f3d4e75e1bfe0035fc0d17e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:01.217786    5779 start.go:364] duration metric: took 247.625µs to acquireMachinesLock for "newest-cni-260000"
	I0819 04:27:01.217945    5779 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:27:01.217982    5779 fix.go:54] fixHost starting: 
	I0819 04:27:01.218696    5779 fix.go:112] recreateIfNeeded on newest-cni-260000: state=Stopped err=<nil>
	W0819 04:27:01.218722    5779 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:27:01.228238    5779 out.go:177] * Restarting existing qemu2 VM for "newest-cni-260000" ...
	I0819 04:27:01.232278    5779 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:01.232471    5779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:20:81:da:c9:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19476-967/.minikube/machines/newest-cni-260000/disk.qcow2
	I0819 04:27:01.241633    5779 main.go:141] libmachine: STDOUT: 
	I0819 04:27:01.241728    5779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:01.241839    5779 fix.go:56] duration metric: took 23.873375ms for fixHost
	I0819 04:27:01.241871    5779 start.go:83] releasing machines lock for "newest-cni-260000", held for 24.054667ms
	W0819 04:27:01.242162    5779 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-260000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-260000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:01.248239    5779 out.go:201] 
	W0819 04:27:01.252293    5779 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:27:01.252317    5779 out.go:270] * 
	* 
	W0819 04:27:01.255061    5779 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:27:01.262270    5779 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (69.172042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
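Unlike FirstStart, this second start takes the fix-host path ("Skipping create...Using existing machine configuration"), so it replays the network settings persisted in the profile and hits the same refused socket on both attempts, five seconds apart. Inspecting the saved profile shows which backend the retry is bound to (paths are from the log; python3 being on the agent is an assumption):

    python3 -m json.tool \
      /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/newest-cni-260000/config.json \
      | grep -iE 'socket|network'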

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-260000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (29.663958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-260000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-260000 --alsologtostderr -v=1: exit status 83 (38.768916ms)

-- stdout --
	* The control-plane node newest-cni-260000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-260000"

-- /stdout --
** stderr ** 
	I0819 04:27:01.444511    5793 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:27:01.444680    5793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:01.444683    5793 out.go:358] Setting ErrFile to fd 2...
	I0819 04:27:01.444685    5793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:01.444803    5793 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 04:27:01.445020    5793 out.go:352] Setting JSON to false
	I0819 04:27:01.445028    5793 mustload.go:65] Loading cluster: newest-cni-260000
	I0819 04:27:01.445209    5793 config.go:182] Loaded profile config "newest-cni-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:27:01.448422    5793 out.go:177] * The control-plane node newest-cni-260000 host is not running: state=Stopped
	I0819 04:27:01.452297    5793 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-260000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-260000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (30.353417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (29.8745ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (156/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 10.04
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.1
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.29
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 198.05
29 TestAddons/serial/Volcano 38.61
31 TestAddons/serial/GCPAuth/Namespaces 0.09
33 TestAddons/parallel/Registry 13.55
34 TestAddons/parallel/Ingress 18.02
35 TestAddons/parallel/InspektorGadget 10.29
36 TestAddons/parallel/MetricsServer 6.27
39 TestAddons/parallel/CSI 38.42
40 TestAddons/parallel/Headlamp 15.61
41 TestAddons/parallel/CloudSpanner 5.21
42 TestAddons/parallel/LocalPath 51.94
43 TestAddons/parallel/NvidiaDevicePlugin 6.19
44 TestAddons/parallel/Yakd 10.22
45 TestAddons/StoppedEnableDisable 12.43
53 TestHyperKitDriverInstallOrUpdate 9.93
56 TestErrorSpam/setup 33.15
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.64
60 TestErrorSpam/unpause 0.58
61 TestErrorSpam/stop 55.26
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 71.89
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 32.97
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.67
73 TestFunctional/serial/CacheCmd/cache/add_local 1.16
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.61
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.73
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 34.94
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.64
84 TestFunctional/serial/LogsFileCmd 0.65
85 TestFunctional/serial/InvalidService 4.31
87 TestFunctional/parallel/ConfigCmd 0.22
88 TestFunctional/parallel/DashboardCmd 8.98
89 TestFunctional/parallel/DryRun 0.25
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 25.93
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.39
102 TestFunctional/parallel/FileSync 0.06
103 TestFunctional/parallel/CertSync 0.4
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
111 TestFunctional/parallel/License 0.39
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.29
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
124 TestFunctional/parallel/ServiceCmd/List 0.31
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
127 TestFunctional/parallel/ServiceCmd/Format 0.09
128 TestFunctional/parallel/ServiceCmd/URL 0.1
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
130 TestFunctional/parallel/ProfileCmd/profile_list 0.11
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
132 TestFunctional/parallel/MountCmd/any-port 6.15
133 TestFunctional/parallel/MountCmd/specific-port 1.04
134 TestFunctional/parallel/MountCmd/VerifyCleanup 0.89
135 TestFunctional/parallel/Version/short 0.04
136 TestFunctional/parallel/Version/components 0.16
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.12
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.1
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
141 TestFunctional/parallel/ImageCommands/ImageBuild 1.88
142 TestFunctional/parallel/ImageCommands/Setup 1.95
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.54
144 TestFunctional/parallel/DockerEnv/bash 0.39
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.38
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.13
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 176.8
161 TestMultiControlPlane/serial/DeployApp 3.95
162 TestMultiControlPlane/serial/PingHostFromPods 0.73
163 TestMultiControlPlane/serial/AddWorkerNode 56.12
164 TestMultiControlPlane/serial/NodeLabels 0.16
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
166 TestMultiControlPlane/serial/CopyFile 4.23
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 77.88
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.32
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 1.75
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.39
277 TestNoKubernetes/serial/Stop 2.07
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.62
294 TestStartStop/group/old-k8s-version/serial/Stop 3.23
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/no-preload/serial/Stop 3.73
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
318 TestStartStop/group/embed-certs/serial/Stop 2.79
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.67
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 4.09
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-584000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-584000: exit status 85 (101.553541ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-584000 | jenkins | v1.33.1 | 19 Aug 24 03:34 PDT |          |
	|         | -p download-only-584000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 03:34:51
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 03:34:51.858962    1436 out.go:345] Setting OutFile to fd 1 ...
	I0819 03:34:51.859098    1436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:34:51.859102    1436 out.go:358] Setting ErrFile to fd 2...
	I0819 03:34:51.859104    1436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:34:51.859218    1436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	W0819 03:34:51.859313    1436 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19476-967/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19476-967/.minikube/config/config.json: no such file or directory
	I0819 03:34:51.860525    1436 out.go:352] Setting JSON to true
	I0819 03:34:51.877683    1436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":254,"bootTime":1724063437,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 03:34:51.877761    1436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 03:34:51.882433    1436 out.go:97] [download-only-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 03:34:51.882559    1436 notify.go:220] Checking for updates...
	W0819 03:34:51.882595    1436 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 03:34:51.886441    1436 out.go:169] MINIKUBE_LOCATION=19476
	I0819 03:34:51.889439    1436 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 03:34:51.894480    1436 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 03:34:51.898499    1436 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 03:34:51.901430    1436 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	W0819 03:34:51.907438    1436 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 03:34:51.907668    1436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 03:34:51.912320    1436 out.go:97] Using the qemu2 driver based on user configuration
	I0819 03:34:51.912338    1436 start.go:297] selected driver: qemu2
	I0819 03:34:51.912341    1436 start.go:901] validating driver "qemu2" against <nil>
	I0819 03:34:51.912417    1436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 03:34:51.915504    1436 out.go:169] Automatically selected the socket_vmnet network
	I0819 03:34:51.921158    1436 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 03:34:51.921255    1436 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 03:34:51.921299    1436 cni.go:84] Creating CNI manager for ""
	I0819 03:34:51.921316    1436 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 03:34:51.921369    1436 start.go:340] cluster config:
	{Name:download-only-584000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 03:34:51.926567    1436 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 03:34:51.931476    1436 out.go:97] Downloading VM boot image ...
	I0819 03:34:51.931508    1436 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0819 03:34:56.631368    1436 out.go:97] Starting "download-only-584000" primary control-plane node in "download-only-584000" cluster
	I0819 03:34:56.631392    1436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 03:34:56.694181    1436 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 03:34:56.694203    1436 cache.go:56] Caching tarball of preloaded images
	I0819 03:34:56.694384    1436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 03:34:56.699535    1436 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 03:34:56.699545    1436 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 03:34:56.786722    1436 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 03:35:02.397869    1436 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 03:35:02.398330    1436 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 03:35:03.102925    1436 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 03:35:03.103120    1436 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/download-only-584000/config.json ...
	I0819 03:35:03.103135    1436 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/download-only-584000/config.json: {Name:mk98fb5cfaef9e8b199d72380c0c2b4f1741ce36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 03:35:03.103308    1436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 03:35:03.103488    1436 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0819 03:35:03.440938    1436 out.go:193] 
	W0819 03:35:03.448950    1436 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19476-967/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1070af960 0x1070af960 0x1070af960 0x1070af960 0x1070af960 0x1070af960 0x1070af960] Decompressors:map[bz2:0x1400080f5b0 gz:0x1400080f5b8 tar:0x1400080f560 tar.bz2:0x1400080f570 tar.gz:0x1400080f580 tar.xz:0x1400080f590 tar.zst:0x1400080f5a0 tbz2:0x1400080f570 tgz:0x1400080f580 txz:0x1400080f590 tzst:0x1400080f5a0 xz:0x1400080f5c0 zip:0x1400080f5d0 zst:0x1400080f5c8] Getters:map[file:0x140017548a0 http:0x1400083a280 https:0x1400083a370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0819 03:35:03.448975    1436 out_reason.go:110] 
	W0819 03:35:03.456887    1436 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 03:35:03.459900    1436 out.go:193] 
	
	
	* The control-plane node download-only-584000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-584000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
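
The failure above is not the kubectl binary download itself but the fetch of its .sha256 checksum file, which returns a 404 (likely because v1.20.0 predates darwin/arm64 kubectl release binaries). A minimal stand-alone probe of the two URLs copied from the log, a hypothetical sketch rather than anything the test runs:

	// checksum404.go - probes the kubectl binary and its .sha256 checksum
	// URL from the log above and prints the HTTP status of each.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		urls := []string{
			"https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl",
			"https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256",
		}
		for _, u := range urls {
			// HEAD is enough to see a 404 without downloading anything.
			resp, err := http.Head(u)
			if err != nil {
				fmt.Println(u, "error:", err)
				continue
			}
			resp.Body.Close()
			fmt.Println(u, "->", resp.Status)
		}
	}
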
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-584000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (10.04s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-372000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-372000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (10.0443405s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (10.04s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-372000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-372000: exit status 85 (80.242ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-584000 | jenkins | v1.33.1 | 19 Aug 24 03:34 PDT |                     |
	|         | -p download-only-584000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 03:35 PDT | 19 Aug 24 03:35 PDT |
	| delete  | -p download-only-584000        | download-only-584000 | jenkins | v1.33.1 | 19 Aug 24 03:35 PDT | 19 Aug 24 03:35 PDT |
	| start   | -o=json --download-only        | download-only-372000 | jenkins | v1.33.1 | 19 Aug 24 03:35 PDT |                     |
	|         | -p download-only-372000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 03:35:03
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 03:35:03.880377    1460 out.go:345] Setting OutFile to fd 1 ...
	I0819 03:35:03.880509    1460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:35:03.880512    1460 out.go:358] Setting ErrFile to fd 2...
	I0819 03:35:03.880515    1460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:35:03.880642    1460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 03:35:03.881666    1460 out.go:352] Setting JSON to true
	I0819 03:35:03.897550    1460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":266,"bootTime":1724063437,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 03:35:03.897610    1460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 03:35:03.901713    1460 out.go:97] [download-only-372000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 03:35:03.901812    1460 notify.go:220] Checking for updates...
	I0819 03:35:03.905656    1460 out.go:169] MINIKUBE_LOCATION=19476
	I0819 03:35:03.908659    1460 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 03:35:03.912646    1460 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 03:35:03.915665    1460 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 03:35:03.918593    1460 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	W0819 03:35:03.924641    1460 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 03:35:03.924789    1460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 03:35:03.927541    1460 out.go:97] Using the qemu2 driver based on user configuration
	I0819 03:35:03.927549    1460 start.go:297] selected driver: qemu2
	I0819 03:35:03.927552    1460 start.go:901] validating driver "qemu2" against <nil>
	I0819 03:35:03.927591    1460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 03:35:03.930563    1460 out.go:169] Automatically selected the socket_vmnet network
	I0819 03:35:03.935783    1460 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 03:35:03.935874    1460 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 03:35:03.935888    1460 cni.go:84] Creating CNI manager for ""
	I0819 03:35:03.935897    1460 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 03:35:03.935902    1460 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 03:35:03.935939    1460 start.go:340] cluster config:
	{Name:download-only-372000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-372000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 03:35:03.939269    1460 iso.go:125] acquiring lock: {Name:mk9bbf20f477d4c64990a7e4e7281f35cf7cfcc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 03:35:03.942643    1460 out.go:97] Starting "download-only-372000" primary control-plane node in "download-only-372000" cluster
	I0819 03:35:03.942651    1460 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 03:35:03.999204    1460 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 03:35:03.999217    1460 cache.go:56] Caching tarball of preloaded images
	I0819 03:35:03.999357    1460 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 03:35:04.004444    1460 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 03:35:04.004450    1460 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 03:35:04.095822    1460 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 03:35:09.872974    1460 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 03:35:09.873157    1460 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19476-967/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 03:35:10.394828    1460 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 03:35:10.395018    1460 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/download-only-372000/config.json ...
	I0819 03:35:10.395033    1460 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/download-only-372000/config.json: {Name:mkcac9ef902a5bedf7300484b3ba9066202ea952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 03:35:10.395274    1460 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 03:35:10.395397    1460 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19476-967/.minikube/cache/darwin/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-372000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-372000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
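
Here the preload tarball download succeeds; the md5 pinned in the ?checksum= query string above is what gets verified after the tarball is saved. A minimal sketch of that verification step, assuming the cached tarball path is passed as the first argument (the expected md5 is copied from the log; this is not minikube's preload.go):

	// verifymd5.go - recompute and compare the md5 of a downloaded file.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		const want = "90c22abece392b762c0b4e45be981bb4" // from ?checksum=md5:... above

		f, err := os.Open(os.Args[1]) // e.g. the cached preload .tar.lz4
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		fmt.Println("match:", hex.EncodeToString(h.Sum(nil)) == want)
	}
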
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-372000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.29s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-363000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-363000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-363000
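
TestBinaryMirror only needs an HTTP server on the loopback port above that mimics the dl.k8s.io release layout. A minimal sketch of such a mirror; the ./mirror directory and its layout are assumptions for illustration, not part of the test:

	// mirror.go - serve a local directory as a --binary-mirror endpoint.
	// Assumes ./mirror mirrors the dl.k8s.io layout, e.g.
	// ./mirror/release/v1.31.0/bin/darwin/arm64/kubectl (assumed layout).
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		log.Fatal(http.ListenAndServe("127.0.0.1:49310",
			http.FileServer(http.Dir("./mirror"))))
	}
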
--- PASS: TestBinaryMirror (0.29s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-758000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-758000: exit status 85 (61.16725ms)

-- stdout --
	* Profile "addons-758000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-758000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-758000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-758000: exit status 85 (57.368208ms)

-- stdout --
	* Profile "addons-758000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-758000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (198.05s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-758000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-758000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m18.051441875s)
--- PASS: TestAddons/Setup (198.05s)

TestAddons/serial/Volcano (38.61s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 7.495208ms
addons_test.go:897: volcano-scheduler stabilized in 7.582791ms
addons_test.go:913: volcano-controller stabilized in 7.654541ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-7q26v" [46ed6e73-3acf-4637-847f-d8ae5de24dc8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.011197083s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-6klpf" [3f1ec6be-0822-4889-9cea-a9ef0edf8520] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.011400333s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-nk92g" [a011de26-45c2-40cf-9e16-40027b1f1cf6] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.010245375s
addons_test.go:932: (dbg) Run:  kubectl --context addons-758000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-758000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-758000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [8f86457f-ddc2-426a-8640-6f8b48705a93] Pending
helpers_test.go:344: "test-job-nginx-0" [8f86457f-ddc2-426a-8640-6f8b48705a93] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [8f86457f-ddc2-426a-8640-6f8b48705a93] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.009156125s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-758000 addons disable volcano --alsologtostderr -v=1: (10.328927125s)
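
The repeated "waiting ... for pods matching" lines come from a helper that polls the API server until every pod behind a label selector reports Running. A rough client-go equivalent, a sketch in the spirit of helpers_test.go rather than minikube's actual helper; the namespace and selector values are taken from the log above:

	// podwait.go - poll until all pods matching a selector are Running.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		ns, sel := "volcano-system", "app=volcano-scheduler" // values from the log
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 {
				healthy := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						healthy = false
					}
				}
				if healthy {
					fmt.Println(sel, "healthy")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatalf("timed out waiting for %s", sel)
	}
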
--- PASS: TestAddons/serial/Volcano (38.61s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-758000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-758000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Registry (13.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.315334ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-5wrfz" [ed63e0a9-9b26-4bc3-98e9-c66793af671c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0059855s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wmmq7" [d30a3655-d5ef-430e-a28b-a4ebc43e2002] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004130584s
addons_test.go:342: (dbg) Run:  kubectl --context addons-758000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-758000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-758000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.255227875s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.55s)

TestAddons/parallel/Ingress (18.02s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-758000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-758000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-758000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f9aa6589-f291-4359-bbbe-d8e33cd7f3a9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f9aa6589-f291-4359-bbbe-d8e33cd7f3a9] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.011107291s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-758000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-758000 addons disable ingress-dns --alsologtostderr -v=1: (1.123108042s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-758000 addons disable ingress --alsologtostderr -v=1: (7.260724958s)
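
The curl step above exercises virtual-host routing: the request targets 127.0.0.1, but the Host: nginx.example.com header is what makes the ingress controller select the nginx backend. The same check as a hypothetical stand-alone sketch:

	// ingresscheck.go - replay the Host-header request the test makes via curl.
	package main

	import (
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			log.Fatal(err)
		}
		// Only the Host header tells the ingress which backend to route to.
		req.Host = "nginx.example.com"

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}
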
--- PASS: TestAddons/parallel/Ingress (18.02s)

TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-27lz7" [116ae026-cd3d-4530-8755-10fe1d13d518] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006474375s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-758000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-758000: (5.281785834s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (6.27s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.451917ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-szm8x" [dc6d3bcc-8f00-4fe8-9400-1b91b6d1b090] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005533s
addons_test.go:417: (dbg) Run:  kubectl --context addons-758000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.27s)

TestAddons/parallel/CSI (38.42s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.037ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-758000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-758000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [10b53372-d465-4d34-b061-3c7ff6f4ee9f] Pending
helpers_test.go:344: "task-pv-pod" [10b53372-d465-4d34-b061-3c7ff6f4ee9f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [10b53372-d465-4d34-b061-3c7ff6f4ee9f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.006976709s
addons_test.go:590: (dbg) Run:  kubectl --context addons-758000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-758000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-758000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-758000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-758000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-758000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/08/19 03:39:41 [DEBUG] GET http://192.168.105.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-758000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [509996a2-8621-40fe-b244-c6d58875be7a] Pending
helpers_test.go:344: "task-pv-pod-restore" [509996a2-8621-40fe-b244-c6d58875be7a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [509996a2-8621-40fe-b244-c6d58875be7a] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0038695s
addons_test.go:632: (dbg) Run:  kubectl --context addons-758000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-758000 delete pod task-pv-pod-restore: (1.107218542s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-758000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-758000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-758000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.144177667s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.42s)

TestAddons/parallel/Headlamp (15.61s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-758000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-6zfcv" [8427e631-77a9-474b-8770-478c80e8e29c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-6zfcv" [8427e631-77a9-474b-8770-478c80e8e29c] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.007290709s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-758000 addons disable headlamp --alsologtostderr -v=1: (5.259522792s)
--- PASS: TestAddons/parallel/Headlamp (15.61s)

TestAddons/parallel/CloudSpanner (5.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-9qs8q" [7b5dde7c-2509-46b3-9cbf-eb282c2e7b3f] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01035025s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-758000
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

TestAddons/parallel/LocalPath (51.94s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-758000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-758000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-758000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8b94754e-2d77-403f-9d5c-c0f586ecb063] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8b94754e-2d77-403f-9d5c-c0f586ecb063] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8b94754e-2d77-403f-9d5c-c0f586ecb063] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.006120042s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-758000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 ssh "cat /opt/local-path-provisioner/pvc-cfb93729-3b9e-4cfa-9bdd-b6e9c5396c96_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-758000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-758000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-758000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.437634958s)
--- PASS: TestAddons/parallel/LocalPath (51.94s)
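The testdata manifests applied above are not reproduced in this log. A minimal sketch of an equivalent PVC/pod pair for the local-path provisioner (object names and label match what the test polls for; the image, size, and file contents are illustrative, and the storage class name is assumed from the upstream local-path-provisioner default):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-pvc
  spec:
    storageClassName: local-path        # class served by the storage-provisioner-rancher addon
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 64Mi
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-local-path
    labels:
      run: test-local-path              # label the test waits on
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "echo local-path > /test/file1"]
      volumeMounts:
      - name: data
        mountPath: /test
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc

The pod runs to completion (Succeeded above), after which the test reads file1 back from the node's /opt/local-path-provisioner path via minikube ssh.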

TestAddons/parallel/NvidiaDevicePlugin (6.19s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-znznk" [afa14558-8b4b-42f9-9356-bf470e3e4935] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.01334475s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-758000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.19s)

TestAddons/parallel/Yakd (10.22s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-n8nbf" [d654b836-d6ec-4cbf-8494-067e6d3bf8ab] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005007792s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-758000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-758000 addons disable yakd --alsologtostderr -v=1: (5.214427875s)
--- PASS: TestAddons/parallel/Yakd (10.22s)

TestAddons/StoppedEnableDisable (12.43s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-758000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-758000: (12.241292791s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-758000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-758000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-758000
--- PASS: TestAddons/StoppedEnableDisable (12.43s)
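The point of this test is that addon state can be toggled against a stopped cluster; presumably enable/disable record the change in the profile's addon config so it takes effect on the next start. Condensed from the commands above, with out/minikube-darwin-arm64 shortened to minikube:

  minikube stop -p addons-758000
  minikube addons enable dashboard -p addons-758000
  minikube addons disable dashboard -p addons-758000
  minikube addons disable gvisor -p addons-758000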

TestHyperKitDriverInstallOrUpdate (9.93s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.93s)

TestErrorSpam/setup (33.15s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-441000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-441000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 --driver=qemu2 : (33.150266375s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (33.15s)

TestErrorSpam/start (0.33s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.24s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.64s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 pause
--- PASS: TestErrorSpam/pause (0.64s)

TestErrorSpam/unpause (0.58s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (55.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 stop: (3.193086625s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 stop: (26.04061925s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-441000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-441000 stop: (26.027618458s)
--- PASS: TestErrorSpam/stop (55.26s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19476-967/.minikube/files/etc/test/nested/copy/1434/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (71.89s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-522000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0819 03:43:32.778485    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:32.788365    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:32.801915    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:32.825409    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:32.868955    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:32.952445    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:33.115921    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:33.439382    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:34.083202    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:35.366871    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:37.930620    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:43.054286    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:43:53.297917    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-522000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m11.890910041s)
--- PASS: TestFunctional/serial/StartWithProxy (71.89s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.97s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-522000 --alsologtostderr -v=8
E0819 03:44:13.781298    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-522000 --alsologtostderr -v=8: (32.97302575s)
functional_test.go:663: soft start took 32.973392834s for "functional-522000" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.97s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-522000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.67s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-522000 cache add registry.k8s.io/pause:3.1: (1.046583958s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.67s)

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2577228354/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 cache add minikube-local-cache-test:functional-522000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 cache delete minikube-local-cache-test:functional-522000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-522000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.61s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-522000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (63.005625ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.61s)
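The block above verifies that cache reload re-pushes previously cached images into the node after they have been removed in-VM: the image is deleted with docker rmi, crictl inspecti confirms it is gone (exit status 1), and after the reload the inspect succeeds. Condensed, with the binary shortened to minikube:

  minikube -p functional-522000 ssh sudo docker rmi registry.k8s.io/pause:latest
  minikube -p functional-522000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
  minikube -p functional-522000 cache reload
  minikube -p functional-522000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds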

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.73s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 kubectl -- --context functional-522000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.73s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-522000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-522000 get pods: (1.023237083s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (34.94s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-522000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0819 03:44:54.742504    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-522000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.936599417s)
functional_test.go:761: restart took 34.936693209s for "functional-522000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.94s)
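--extra-config takes component.key=value pairs that are forwarded to the named Kubernetes component, so the invocation above appends NamespaceAutoProvision to the apiserver's admission plugin list and restarts the cluster with --wait=all. The same flag shape covers other components, for example (the kubelet key here is illustrative, not from this run):

  minikube start -p functional-522000 --extra-config=kubelet.max-pods=150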

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-522000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
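The check pulls the control-plane pods as JSON and asserts each is Running and Ready, which is what the phase/status pairs above record. A rough hand-run equivalent (the jsonpath expression is illustrative):

  kubectl --context functional-522000 get po -n kube-system -l tier=control-plane \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'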

TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.65s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1393551686/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

TestFunctional/serial/InvalidService (4.31s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-522000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-522000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-522000: exit status 115 (105.897625ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32024 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-522000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-522000 delete -f testdata/invalidsvc.yaml: (1.110875542s)
--- PASS: TestFunctional/serial/InvalidService (4.31s)
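testdata/invalidsvc.yaml is not shown in the log, but the SVC_UNREACHABLE error ("no running pod for service invalid-svc found") implies a NodePort service whose selector matches no running pod, so minikube service refuses with exit status 115 as seen above. A minimal sketch of such a service (the selector value is illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: invalid-svc
  spec:
    type: NodePort
    selector:
      app: no-such-pod    # matches no pod, so the service never gets endpoints
    ports:
    - port: 80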

TestFunctional/parallel/ConfigCmd (0.22s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-522000 config get cpus: exit status 14 (30.394542ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-522000 config get cpus: exit status 14 (30.929125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
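config get on an unset key fails with exit status 14 ("specified key could not be found in config"), which the test asserts both before the set and again after the unset. Condensed:

  minikube -p functional-522000 config set cpus 2
  minikube -p functional-522000 config get cpus     # prints 2
  minikube -p functional-522000 config unset cpus
  minikube -p functional-522000 config get cpus     # exit 14: key not found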

TestFunctional/parallel/DashboardCmd (8.98s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-522000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-522000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2119: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.98s)

TestFunctional/parallel/DryRun (0.25s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-522000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-522000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (130.786208ms)

-- stdout --
	* [functional-522000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0819 03:46:00.416862    2106 out.go:345] Setting OutFile to fd 1 ...
	I0819 03:46:00.416994    2106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:46:00.417002    2106 out.go:358] Setting ErrFile to fd 2...
	I0819 03:46:00.417004    2106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:46:00.417132    2106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 03:46:00.418268    2106 out.go:352] Setting JSON to false
	I0819 03:46:00.434994    2106 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":923,"bootTime":1724063437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 03:46:00.435069    2106 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 03:46:00.439315    2106 out.go:177] * [functional-522000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 03:46:00.446291    2106 notify.go:220] Checking for updates...
	I0819 03:46:00.450287    2106 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 03:46:00.457216    2106 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 03:46:00.465180    2106 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 03:46:00.473142    2106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 03:46:00.477238    2106 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 03:46:00.481252    2106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 03:46:00.485514    2106 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 03:46:00.485780    2106 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 03:46:00.489180    2106 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 03:46:00.494167    2106 start.go:297] selected driver: qemu2
	I0819 03:46:00.494173    2106 start.go:901] validating driver "qemu2" against &{Name:functional-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 03:46:00.494221    2106 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 03:46:00.500276    2106 out.go:201] 
	W0819 03:46:00.505186    2106 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 03:46:00.510244    2106 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-522000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.25s)
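--dry-run runs minikube's full argument and resource validation without creating or mutating the VM, so the undersized memory request fails fast; the run above shows the RSRC_INSUFFICIENT_REQ_MEMORY error mapped to exit status 23:

  minikube start -p functional-522000 --dry-run --memory 250MB --driver=qemu2; echo $?   # 23 in this run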

TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-522000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-522000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (123.939959ms)

-- stdout --
	* [functional-522000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0819 03:46:00.289681    2102 out.go:345] Setting OutFile to fd 1 ...
	I0819 03:46:00.289786    2102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:46:00.289789    2102 out.go:358] Setting ErrFile to fd 2...
	I0819 03:46:00.289791    2102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 03:46:00.289935    2102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
	I0819 03:46:00.291263    2102 out.go:352] Setting JSON to false
	I0819 03:46:00.309586    2102 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":923,"bootTime":1724063437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0819 03:46:00.309697    2102 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 03:46:00.315279    2102 out.go:177] * [functional-522000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0819 03:46:00.323291    2102 notify.go:220] Checking for updates...
	I0819 03:46:00.327227    2102 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 03:46:00.334228    2102 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	I0819 03:46:00.338207    2102 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 03:46:00.342223    2102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 03:46:00.343262    2102 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	I0819 03:46:00.346229    2102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 03:46:00.350579    2102 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 03:46:00.350848    2102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 03:46:00.354102    2102 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0819 03:46:00.364666    2102 start.go:297] selected driver: qemu2
	I0819 03:46:00.364677    2102 start.go:901] validating driver "qemu2" against &{Name:functional-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 03:46:00.364743    2102 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 03:46:00.372229    2102 out.go:201] 
	W0819 03:46:00.375996    2102 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 03:46:00.380196    2102 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
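The French output above shows minikube localizing its user-facing messages from the host locale: the same RSRC_INSUFFICIENT_REQ_MEMORY failure is reported as "Fermeture en raison de ...". The test presumably drives this through the locale environment before invoking the binary, along the lines of (the exact variable is an assumption):

  LC_ALL=fr_FR.UTF-8 minikube start -p functional-522000 --dry-run --memory 250MB --driver=qemu2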

TestFunctional/parallel/StatusCmd (0.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
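status -f renders the status struct through a Go template, so arbitrary labels can be attached to the .Host, .Kubelet, .APIServer, and .Kubeconfig fields (the "kublet:" spelling above is simply the literal text of the test's format string, not a field name). For example:

  minikube -p functional-522000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'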

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (25.93s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [41b24b5b-ac22-47b5-8d37-1632d8a278dd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.011204625s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-522000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-522000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-522000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-522000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c7069604-a22a-4a55-ab73-7321b344b9bf] Pending
helpers_test.go:344: "sp-pod" [c7069604-a22a-4a55-ab73-7321b344b9bf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c7069604-a22a-4a55-ab73-7321b344b9bf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.010621417s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-522000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-522000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-522000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eab1e48e-c8d4-4b2e-8540-61fcc8258448] Pending
helpers_test.go:344: "sp-pod" [eab1e48e-c8d4-4b2e-8540-61fcc8258448] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eab1e48e-c8d4-4b2e-8540-61fcc8258448] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006490542s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-522000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.93s)
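The second apply/exec round is the point of the test: data written to the PVC-backed mount survives deletion of the pod, because the claim and its provisioned volume persist independently of any one consumer. Condensed from the log:

  kubectl --context functional-522000 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-522000 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-522000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-522000 exec sp-pod -- ls /tmp/mount   # foo is still there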

TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.39s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh -n functional-522000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 cp functional-522000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2470642943/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh -n functional-522000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh -n functional-522000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.39s)

TestFunctional/parallel/FileSync (0.06s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1434/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo cat /etc/test/nested/copy/1434/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.4s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1434.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo cat /etc/ssl/certs/1434.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1434.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo cat /usr/share/ca-certificates/1434.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo cat /etc/ssl/certs/14342.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo cat /usr/share/ca-certificates/14342.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-522000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-522000 ssh "sudo systemctl is-active crio": exit status 1 (78.253083ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

TestFunctional/parallel/License (0.39s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.39s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.29s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-522000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-522000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-522000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1960: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-522000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.29s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-522000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-522000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [87e76b57-60bf-4d6e-9f3d-7f8f5dd1c05f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [87e76b57-60bf-4d6e-9f3d-7f8f5dd1c05f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004381666s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-522000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.74.8 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-522000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-522000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-522000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-29qvb" [cb6fcc8f-7cce-4417-94d4-5eeac95617c6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-29qvb" [cb6fcc8f-7cce-4417-94d4-5eeac95617c6] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.010107459s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 service list -o json
functional_test.go:1494: Took "289.133167ms" to run "out/minikube-darwin-arm64 -p functional-522000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32288
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32288
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "81.063583ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.576875ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "80.4ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.39375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/MountCmd/any-port (6.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port973919382/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724064351948842000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port973919382/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724064351948842000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port973919382/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724064351948842000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port973919382/001/test-1724064351948842000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (56.296959ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 10:45 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 10:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 10:45 test-1724064351948842000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh cat /mount-9p/test-1724064351948842000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-522000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6] Pending
helpers_test.go:344: "busybox-mount" [3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3e0b4f98-8d6b-4fa3-9ecd-11c9a4f174b6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.01077375s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-522000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port973919382/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.15s)

TestFunctional/parallel/MountCmd/specific-port (1.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3461367497/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.207375ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3461367497/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-522000 ssh "sudo umount -f /mount-9p": exit status 1 (59.174375ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-522000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3461367497/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.04s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.89s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424060481/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424060481/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424060481/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T" /mount1: exit status 1 (66.173875ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-522000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424060481/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424060481/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-522000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424060481/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.89s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-522000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-522000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-522000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-522000 image ls --format short --alsologtostderr:
I0819 03:46:12.754184    2259 out.go:345] Setting OutFile to fd 1 ...
I0819 03:46:12.754302    2259 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.754304    2259 out.go:358] Setting ErrFile to fd 2...
I0819 03:46:12.754307    2259 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.754442    2259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
I0819 03:46:12.754900    2259 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.754959    2259 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.755745    2259 ssh_runner.go:195] Run: systemctl --version
I0819 03:46:12.755755    2259 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/functional-522000/id_rsa Username:docker}
I0819 03:46:12.778679    2259 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-522000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/library/minikube-local-cache-test | functional-522000 | f7700c53bba3c | 30B    |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| docker.io/kicbase/echo-server               | functional-522000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-522000 image ls --format table --alsologtostderr:
I0819 03:46:12.938371    2274 out.go:345] Setting OutFile to fd 1 ...
I0819 03:46:12.938528    2274 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.938531    2274 out.go:358] Setting ErrFile to fd 2...
I0819 03:46:12.938533    2274 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.938675    2274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
I0819 03:46:12.939170    2274 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.939227    2274 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.940135    2274 ssh_runner.go:195] Run: systemctl --version
I0819 03:46:12.940145    2274 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/functional-522000/id_rsa Username:docker}
I0819 03:46:12.986004    2274 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.12s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-522000 image ls --format json --alsologtostderr:
[{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"f7700c53bba3cebee213585e4a4765a85aa27003d3b502adac5b660d6e520f24","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-522000"],"size":"30"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af1
7a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-522000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["do
cker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"
size":"514000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-522000 image ls --format json --alsologtostderr:
I0819 03:46:12.835705    2265 out.go:345] Setting OutFile to fd 1 ...
I0819 03:46:12.835848    2265 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.835853    2265 out.go:358] Setting ErrFile to fd 2...
I0819 03:46:12.835856    2265 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.835986    2265 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
I0819 03:46:12.836437    2265 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.836515    2265 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.837264    2265 ssh_runner.go:195] Run: systemctl --version
I0819 03:46:12.837275    2265 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/functional-522000/id_rsa Username:docker}
I0819 03:46:12.878725    2265 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.10s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-522000 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: f7700c53bba3cebee213585e4a4765a85aa27003d3b502adac5b660d6e520f24
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-522000
size: "30"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-522000
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-522000 image ls --format yaml --alsologtostderr:
I0819 03:46:12.754130    2260 out.go:345] Setting OutFile to fd 1 ...
I0819 03:46:12.754294    2260 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.754298    2260 out.go:358] Setting ErrFile to fd 2...
I0819 03:46:12.754300    2260 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.754478    2260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
I0819 03:46:12.754923    2260 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.754980    2260 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.756098    2260 ssh_runner.go:195] Run: systemctl --version
I0819 03:46:12.756105    2260 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/functional-522000/id_rsa Username:docker}
I0819 03:46:12.778680    2260 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
W0819 03:46:12.795254    2260 root.go:91] failed to log command end to audit: failed to find a log row with id equals to fa3f04c3-4b7e-4cff-ba46-35f354419c0b
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-522000 ssh pgrep buildkitd: exit status 1 (59.001666ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image build -t localhost/my-image:functional-522000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-522000 image build -t localhost/my-image:functional-522000 testdata/build --alsologtostderr: (1.749039s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-522000 image build -t localhost/my-image:functional-522000 testdata/build --alsologtostderr:
I0819 03:46:12.893657    2268 out.go:345] Setting OutFile to fd 1 ...
I0819 03:46:12.893882    2268 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.893885    2268 out.go:358] Setting ErrFile to fd 2...
I0819 03:46:12.893888    2268 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 03:46:12.894015    2268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19476-967/.minikube/bin
I0819 03:46:12.894444    2268 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.895130    2268 config.go:182] Loaded profile config "functional-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 03:46:12.896057    2268 ssh_runner.go:195] Run: systemctl --version
I0819 03:46:12.896069    2268 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19476-967/.minikube/machines/functional-522000/id_rsa Username:docker}
I0819 03:46:12.968954    2268 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2511577769.tar
I0819 03:46:12.969036    2268 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 03:46:12.974570    2268 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2511577769.tar
I0819 03:46:12.985793    2268 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2511577769.tar: stat -c "%s %y" /var/lib/minikube/build/build.2511577769.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2511577769.tar': No such file or directory
I0819 03:46:12.985826    2268 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2511577769.tar --> /var/lib/minikube/build/build.2511577769.tar (3072 bytes)
I0819 03:46:13.023273    2268 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2511577769
I0819 03:46:13.032121    2268 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2511577769 -xf /var/lib/minikube/build/build.2511577769.tar
I0819 03:46:13.035634    2268 docker.go:360] Building image: /var/lib/minikube/build/build.2511577769
I0819 03:46:13.035686    2268 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-522000 /var/lib/minikube/build/build.2511577769
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:02fb32e208d1af654974ca268a6301f6479c9e52f7da5c6e2b31e69650ec1f24 done
#8 naming to localhost/my-image:functional-522000 done
#8 DONE 0.0s
I0819 03:46:14.597481    2268 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-522000 /var/lib/minikube/build/build.2511577769: (1.560326084s)
I0819 03:46:14.597550    2268 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2511577769
I0819 03:46:14.601465    2268 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2511577769.tar
I0819 03:46:14.604612    2268 build_images.go:217] Built localhost/my-image:functional-522000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2511577769.tar
I0819 03:46:14.604630    2268 build_images.go:133] succeeded building to: functional-522000
I0819 03:46:14.604633    2268 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)

TestFunctional/parallel/ImageCommands/Setup (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/08/19 03:46:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.932694041s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-522000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image load --daemon kicbase/echo-server:functional-522000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.54s)

TestFunctional/parallel/DockerEnv/bash (0.39s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-522000 docker-env) && out/minikube-darwin-arm64 status -p functional-522000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-522000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image load --daemon kicbase/echo-server:functional-522000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-522000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image load --daemon kicbase/echo-server:functional-522000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image save kicbase/echo-server:functional-522000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image rm kicbase/echo-server:functional-522000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.13s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-522000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-522000 image save --daemon kicbase/echo-server:functional-522000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-522000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
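
Note: taken together, the ImageCommands tests above walk one image through a full round trip between the host Docker daemon, the cluster's container runtime, and a tarball on disk. A condensed sketch of that flow (tag and subcommands copied from the log; the /tmp path is illustrative, the log used a Jenkins workspace path):

	minikube -p functional-522000 image load --daemon kicbase/echo-server:functional-522000   # host daemon -> cluster
	minikube -p functional-522000 image save kicbase/echo-server:functional-522000 /tmp/echo-server-save.tar   # cluster -> tarball
	minikube -p functional-522000 image rm kicbase/echo-server:functional-522000              # drop it from the cluster
	minikube -p functional-522000 image load /tmp/echo-server-save.tar                        # tarball -> cluster
	minikube -p functional-522000 image save --daemon kicbase/echo-server:functional-522000   # cluster -> host daemon
	minikube -p functional-522000 image ls                                                    # verify after each step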

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-522000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-522000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-522000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (176.8s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-927000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0819 03:46:16.687095    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:48:32.807671    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 03:49:00.540666    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-927000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m56.596869708s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (176.80s)
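
Note: the --ha flag in the start command above requests a multi-control-plane topology rather than the default single control plane, which is what the rest of the TestMultiControlPlane sequence then exercises node by node. The shape of the invocation, with flags copied from the log:

	# Start a highly-available cluster, then report per-node status.
	minikube start -p ha-927000 --wait=true --memory=2200 --ha --driver=qemu2
	minikube -p ha-927000 status -v=7 --alsologtostderr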

TestMultiControlPlane/serial/DeployApp (3.95s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-927000 -- rollout status deployment/busybox: (2.549783917s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-64pht -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-b8dds -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-wsffr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-64pht -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-b8dds -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-wsffr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-64pht -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-b8dds -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-wsffr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.95s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-64pht -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-64pht -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-b8dds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-b8dds -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-wsffr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-927000 -- exec busybox-7dff88458-wsffr -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)
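
Note: the pipeline repeated above pulls the host's IP out of busybox nslookup output, where the answer sits on the fifth line with the address as the third space-separated field, and then pings that address once from inside the pod. The same extraction, annotated (pod name copied from the log; plain kubectl with the profile's context stands in for the out/minikube-darwin-arm64 kubectl wrapper the test uses, and the busybox nslookup output layout is assumed):

	# Resolve host.minikube.internal inside the pod and keep only the IP.
	IP=$(kubectl --context ha-927000 exec busybox-7dff88458-64pht -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context ha-927000 exec busybox-7dff88458-64pht -- ping -c 1 "$IP"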

TestMultiControlPlane/serial/AddWorkerNode (56.12s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-927000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-927000 -v=7 --alsologtostderr: (55.889550167s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.12s)

TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-927000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

TestMultiControlPlane/serial/CopyFile (4.23s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp testdata/cp-test.txt ha-927000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1347387617/001/cp-test_ha-927000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000:/home/docker/cp-test.txt ha-927000-m02:/home/docker/cp-test_ha-927000_ha-927000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m02 "sudo cat /home/docker/cp-test_ha-927000_ha-927000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000:/home/docker/cp-test.txt ha-927000-m03:/home/docker/cp-test_ha-927000_ha-927000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m03 "sudo cat /home/docker/cp-test_ha-927000_ha-927000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000:/home/docker/cp-test.txt ha-927000-m04:/home/docker/cp-test_ha-927000_ha-927000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m04 "sudo cat /home/docker/cp-test_ha-927000_ha-927000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp testdata/cp-test.txt ha-927000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1347387617/001/cp-test_ha-927000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m02:/home/docker/cp-test.txt ha-927000:/home/docker/cp-test_ha-927000-m02_ha-927000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000 "sudo cat /home/docker/cp-test_ha-927000-m02_ha-927000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m02:/home/docker/cp-test.txt ha-927000-m03:/home/docker/cp-test_ha-927000-m02_ha-927000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m03 "sudo cat /home/docker/cp-test_ha-927000-m02_ha-927000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m02:/home/docker/cp-test.txt ha-927000-m04:/home/docker/cp-test_ha-927000-m02_ha-927000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m04 "sudo cat /home/docker/cp-test_ha-927000-m02_ha-927000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp testdata/cp-test.txt ha-927000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1347387617/001/cp-test_ha-927000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m03:/home/docker/cp-test.txt ha-927000:/home/docker/cp-test_ha-927000-m03_ha-927000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000 "sudo cat /home/docker/cp-test_ha-927000-m03_ha-927000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m03:/home/docker/cp-test.txt ha-927000-m02:/home/docker/cp-test_ha-927000-m03_ha-927000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m02 "sudo cat /home/docker/cp-test_ha-927000-m03_ha-927000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m03:/home/docker/cp-test.txt ha-927000-m04:/home/docker/cp-test_ha-927000-m03_ha-927000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m04 "sudo cat /home/docker/cp-test_ha-927000-m03_ha-927000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp testdata/cp-test.txt ha-927000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1347387617/001/cp-test_ha-927000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m04:/home/docker/cp-test.txt ha-927000:/home/docker/cp-test_ha-927000-m04_ha-927000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000 "sudo cat /home/docker/cp-test_ha-927000-m04_ha-927000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m04:/home/docker/cp-test.txt ha-927000-m02:/home/docker/cp-test_ha-927000-m04_ha-927000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m02 "sudo cat /home/docker/cp-test_ha-927000-m04_ha-927000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 cp ha-927000-m04:/home/docker/cp-test.txt ha-927000-m03:/home/docker/cp-test_ha-927000-m04_ha-927000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-927000 ssh -n ha-927000-m03 "sudo cat /home/docker/cp-test_ha-927000-m04_ha-927000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.23s)
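
Note: the CopyFile matrix above repeats one pattern for every node pair: copy a file in with minikube cp, then read it back over minikube ssh to prove it landed. One round of the pattern (node names and paths copied from the log):

	minikube -p ha-927000 cp testdata/cp-test.txt ha-927000-m02:/home/docker/cp-test.txt
	minikube -p ha-927000 ssh -n ha-927000-m02 "sudo cat /home/docker/cp-test.txt"   # should echo the testdata content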

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0819 03:59:55.894734    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
E0819 04:00:18.692527    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.883144833s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.88s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.32s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-210000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-210000 --output=json --user=testUser: (3.318606875s)
--- PASS: TestJSONOutput/stop/Command (3.32s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-017000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-017000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.971ms)
-- stdout --
	{"specversion":"1.0","id":"0bfbdea7-f052-4e85-8503-fe906b15d5ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-017000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"53184b03-0b04-41b0-84de-c2b3109de5d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19476"}}
	{"specversion":"1.0","id":"6034643f-59a0-4702-a837-a2d9dede6d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig"}}
	{"specversion":"1.0","id":"037ce297-6d69-4afe-873b-e1aa41f0e048","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a6fa65d5-c753-4a84-a426-de00f02874ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bcc13269-2281-41a9-a564-39589fa6e58b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube"}}
	{"specversion":"1.0","id":"02d18dad-c081-47d4-a3b0-253a584a38df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"957fbb0e-378c-4a9f-b8aa-33b5b02d96fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-017000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-017000
--- PASS: TestErrorJSONOutput (0.20s)
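
Note: every line of the --output=json stream above is a CloudEvents envelope, so the failure can be extracted mechanically rather than by scraping text. A sketch using jq (jq itself is an assumption, not part of the test; the event type string and fields are copied from the log):

	minikube start -p json-output-error-017000 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message) (exit \(.exitcode))"'
	# -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/arm64 (exit 56)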

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.75s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-182000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-182000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.880125ms)
-- stdout --
	* [NoKubernetes-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19476-967/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19476-967/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
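
Note: this test asserts that --no-kubernetes and --kubernetes-version are mutually exclusive; the MK_USAGE error in the stderr dump also shows the way out when a version has been pinned in the global config. The sequence the error message itself suggests:

	minikube config unset kubernetes-version        # clear the globally configured version
	minikube start -p NoKubernetes-182000 --no-kubernetes --driver=qemu2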

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-182000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-182000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.063125ms)
-- stdout --
	* The control-plane node NoKubernetes-182000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-182000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.39s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
E0819 04:23:21.768057    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/functional-522000/client.crt: no such file or directory" logger="UnhandledError"
E0819 04:23:32.783694    1434 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19476-967/.minikube/profiles/addons-758000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.73736875s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.65210525s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.39s)

TestNoKubernetes/serial/Stop (2.07s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-182000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-182000: (2.071347042s)
--- PASS: TestNoKubernetes/serial/Stop (2.07s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-182000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-182000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.990666ms)
-- stdout --
	* The control-plane node NoKubernetes-182000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-182000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-446000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)

TestStartStop/group/old-k8s-version/serial/Stop (3.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-971000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-971000 --alsologtostderr -v=3: (3.23175325s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.23s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (53.730334ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-971000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.73s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-752000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-752000 --alsologtostderr -v=3: (3.734751792s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.73s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-752000 -n no-preload-752000: exit status 7 (49.768291ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-752000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (2.79s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-102000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-102000 --alsologtostderr -v=3: (2.786171375s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.79s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-102000 -n embed-certs-102000: exit status 7 (57.767417ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-102000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-030000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-030000 --alsologtostderr -v=3: (3.668841875s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-030000 -n default-k8s-diff-port-030000: exit status 7 (56.118292ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-030000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-260000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (4.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-260000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-260000 --alsologtostderr -v=3: (4.089812333s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (4.09s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (61.431667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-260000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.27s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-745000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-745000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-745000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-745000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-745000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-745000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-745000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-745000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-745000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-745000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-745000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /etc/hosts:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /etc/resolv.conf:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-745000

>>> host: crictl pods:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: crictl containers:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> k8s: describe netcat deployment:
error: context "cilium-745000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-745000" does not exist

>>> k8s: netcat logs:
error: context "cilium-745000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-745000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-745000" does not exist

>>> k8s: coredns logs:
error: context "cilium-745000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-745000" does not exist

>>> k8s: api server logs:
error: context "cilium-745000" does not exist

>>> host: /etc/cni:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: ip a s:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: ip r s:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: iptables-save:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: iptables table nat:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-745000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-745000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-745000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-745000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-745000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-745000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-745000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-745000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-745000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-745000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-745000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: kubelet daemon config:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> k8s: kubelet logs:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-745000

>>> host: docker daemon status:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: docker daemon config:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: docker system info:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: cri-docker daemon status:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: cri-docker daemon config:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: cri-dockerd version:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: containerd daemon status:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: containerd daemon config:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: containerd config dump:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: crio daemon status:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: crio daemon config:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: /etc/crio:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

>>> host: crio config:
* Profile "cilium-745000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-745000"

----------------------- debugLogs end: cilium-745000 [took: 2.168624333s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-745000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-745000
--- SKIP: TestNetworkPlugins/group/cilium (2.27s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-000000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-000000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
