Test Report: QEMU_macOS 19501

483e94d4f5cf3f9f4d946099f728195390e8d80c:2024-08-26:35948

Failed tests (97/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 18.97
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.09
46 TestCertOptions 10.19
47 TestCertExpiration 196.02
48 TestDockerFlags 12.76
49 TestForceSystemdFlag 10.05
50 TestForceSystemdEnv 10.15
95 TestFunctional/parallel/ServiceCmdConnect 34.19
167 TestMultiControlPlane/serial/StopSecondaryNode 214.1
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 104.33
169 TestMultiControlPlane/serial/RestartSecondaryNode 208.23
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.42
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 202.07
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.06
184 TestJSONOutput/start/Command 9.97
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.34
216 TestMountStart/serial/StartWithMountFirst 10
219 TestMultiNode/serial/FreshStart2Nodes 9.9
220 TestMultiNode/serial/DeployApp2Nodes 109.32
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.13
227 TestMultiNode/serial/StartAfterStop 45.53
228 TestMultiNode/serial/RestartKeepsNodes 8.98
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 1.93
231 TestMultiNode/serial/RestartMultiNode 5.26
232 TestMultiNode/serial/ValidateNameConflict 20.5
236 TestPreload 10.21
238 TestScheduledStopUnix 9.93
239 TestSkaffold 12.5
242 TestRunningBinaryUpgrade 645.39
244 TestKubernetesUpgrade 19
258 TestStoppedBinaryUpgrade/Upgrade 595.56
268 TestPause/serial/Start 9.87
271 TestNoKubernetes/serial/StartWithK8s 9.95
272 TestNoKubernetes/serial/StartWithStopK8s 5.32
273 TestNoKubernetes/serial/Start 5.34
277 TestNoKubernetes/serial/StartNoArgs 6.61
278 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.8
279 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.58
281 TestNetworkPlugins/group/auto/Start 9.78
282 TestNetworkPlugins/group/calico/Start 9.83
283 TestNetworkPlugins/group/custom-flannel/Start 9.82
284 TestNetworkPlugins/group/false/Start 10.01
285 TestNetworkPlugins/group/kindnet/Start 9.85
286 TestNetworkPlugins/group/flannel/Start 9.83
287 TestNetworkPlugins/group/enable-default-cni/Start 10.2
288 TestNetworkPlugins/group/bridge/Start 10.01
289 TestNetworkPlugins/group/kubenet/Start 9.89
291 TestStartStop/group/old-k8s-version/serial/FirstStart 10.13
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 10.36
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 5.26
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
311 TestStartStop/group/no-preload/serial/Pause 0.1
313 TestStartStop/group/embed-certs/serial/FirstStart 10.07
314 TestStartStop/group/embed-certs/serial/DeployApp 0.09
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
318 TestStartStop/group/embed-certs/serial/SecondStart 5.26
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
322 TestStartStop/group/embed-certs/serial/Pause 0.1
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.06
326 TestStartStop/group/newest-cni/serial/FirstStart 10.08
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.35
336 TestStartStop/group/newest-cni/serial/SecondStart 5.25
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (18.97s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-004000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-004000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (18.968395541s)

-- stdout --
	{"specversion":"1.0","id":"467864c4-3aff-47c7-bb42-3d626625d6d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-004000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a80cf9f9-cb90-4e0a-8863-aa3a3d3a321c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19501"}}
	{"specversion":"1.0","id":"154806d3-b736-47d2-961e-bb2ead4c0b97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig"}}
	{"specversion":"1.0","id":"ec5d30ce-b832-4806-a057-bda9faefd964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"72a13494-7fd6-4668-8c3f-e15bde313035","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f60fff8c-36b2-4b0f-9a53-3bb2ab95c354","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube"}}
	{"specversion":"1.0","id":"c0a6086b-c2a6-4cd8-aca7-318e6b956bbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"abacfb32-8a58-4bdd-98ed-f1d48827e76b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"08da0772-bc2b-4d76-b743-e466bdb176a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"32dca4f6-abe9-4976-8347-b94a96c2b1b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"181e98e4-5041-49c0-9fad-cd2dde805e19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-004000\" primary control-plane node in \"download-only-004000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c3ae81d-3be6-4c52-bf22-f383cd037899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c0ebf6b0-8c16-4bca-9a11-1f5b37ca24e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e23920 0x108e23920 0x108e23920 0x108e23920 0x108e23920 0x108e23920 0x108e23920] Decompressors:map[bz2:0x1400050e300 gz:0x1400050e308 tar:0x1400050e2a0 tar.bz2:0x1400050e2c0 tar.gz:0x1400050e2d0 tar.xz:0x1400050e2e0 tar.zst:0x1400050e2f0 tbz2:0x1400050e2c0 tgz:0x1400050e2d0 txz:0x1400050e2e0 tzst:0x1400050e2f0 xz:0x1400050e310 zip:0x1400050e320 zst:0x1400050e318] Getters:map[file:0x140004a6d00 http:0x140005d2190 https:0x140005d21e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"614b6b76-2734-45ca-b05a-2f92e53c2943","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0826 03:34:32.408501    1541 out.go:345] Setting OutFile to fd 1 ...
	I0826 03:34:32.408650    1541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:34:32.408654    1541 out.go:358] Setting ErrFile to fd 2...
	I0826 03:34:32.408656    1541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:34:32.408784    1541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	W0826 03:34:32.408881    1541 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19501-1045/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19501-1045/.minikube/config/config.json: no such file or directory
	I0826 03:34:32.410181    1541 out.go:352] Setting JSON to true
	I0826 03:34:32.427428    1541 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":236,"bootTime":1724668236,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 03:34:32.427491    1541 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 03:34:32.433171    1541 out.go:97] [download-only-004000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 03:34:32.433343    1541 notify.go:220] Checking for updates...
	W0826 03:34:32.433352    1541 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball: no such file or directory
	I0826 03:34:32.438105    1541 out.go:169] MINIKUBE_LOCATION=19501
	I0826 03:34:32.444103    1541 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 03:34:32.449163    1541 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 03:34:32.453058    1541 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 03:34:32.456062    1541 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	W0826 03:34:32.462049    1541 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0826 03:34:32.462249    1541 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 03:34:32.467043    1541 out.go:97] Using the qemu2 driver based on user configuration
	I0826 03:34:32.467063    1541 start.go:297] selected driver: qemu2
	I0826 03:34:32.467077    1541 start.go:901] validating driver "qemu2" against <nil>
	I0826 03:34:32.467146    1541 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 03:34:32.471113    1541 out.go:169] Automatically selected the socket_vmnet network
	I0826 03:34:32.476579    1541 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0826 03:34:32.476671    1541 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0826 03:34:32.476705    1541 cni.go:84] Creating CNI manager for ""
	I0826 03:34:32.476722    1541 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0826 03:34:32.476770    1541 start.go:340] cluster config:
	{Name:download-only-004000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-004000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 03:34:32.481871    1541 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 03:34:32.486066    1541 out.go:97] Downloading VM boot image ...
	I0826 03:34:32.486091    1541 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0826 03:34:44.138668    1541 out.go:97] Starting "download-only-004000" primary control-plane node in "download-only-004000" cluster
	I0826 03:34:44.138686    1541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0826 03:34:44.210563    1541 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0826 03:34:44.210569    1541 cache.go:56] Caching tarball of preloaded images
	I0826 03:34:44.210735    1541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0826 03:34:44.214925    1541 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0826 03:34:44.214932    1541 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0826 03:34:44.308360    1541 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0826 03:34:50.059751    1541 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0826 03:34:50.059928    1541 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0826 03:34:50.755714    1541 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0826 03:34:50.755923    1541 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/download-only-004000/config.json ...
	I0826 03:34:50.755942    1541 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/download-only-004000/config.json: {Name:mkfe3aa789db55db5093ac99da8ea4bd6b2ffa89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 03:34:50.756162    1541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0826 03:34:50.756332    1541 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0826 03:34:51.303970    1541 out.go:193] 
	W0826 03:34:51.310953    1541 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e23920 0x108e23920 0x108e23920 0x108e23920 0x108e23920 0x108e23920 0x108e23920] Decompressors:map[bz2:0x1400050e300 gz:0x1400050e308 tar:0x1400050e2a0 tar.bz2:0x1400050e2c0 tar.gz:0x1400050e2d0 tar.xz:0x1400050e2e0 tar.zst:0x1400050e2f0 tbz2:0x1400050e2c0 tgz:0x1400050e2d0 txz:0x1400050e2e0 tzst:0x1400050e2f0 xz:0x1400050e310 zip:0x1400050e320 zst:0x1400050e318] Getters:map[file:0x140004a6d00 http:0x140005d2190 https:0x140005d21e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0826 03:34:51.310977    1541 out_reason.go:110] 
	W0826 03:34:51.319896    1541 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 03:34:51.322925    1541 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-004000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (18.97s)
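
The exit status 40 above comes from a 404 on the kubectl checksum URL: dl.k8s.io has no darwin/arm64 kubectl build for v1.20.0, so caching that version on an Apple Silicon agent cannot succeed. The two requests below are a manual sanity check, not part of the test suite; the v1.21.0 URL is an assumption, chosen as an example of a release that does ship darwin/arm64 client binaries. curl follows the dl.k8s.io redirect and prints the final HTTP status:

	# The checksum file the getter asked for; expect 404, matching the log above.
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# Same path for a release assumed to publish darwin/arm64 binaries; expect 200.
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.21.0/bin/darwin/arm64/kubectl.sha256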

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-572000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-572000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.946880917s)

-- stdout --
	* [offline-docker-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-572000" primary control-plane node in "offline-docker-572000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-572000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:11:49.019958    3935 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:11:49.020097    3935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:11:49.020100    3935 out.go:358] Setting ErrFile to fd 2...
	I0826 04:11:49.020102    3935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:11:49.020238    3935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:11:49.021361    3935 out.go:352] Setting JSON to false
	I0826 04:11:49.039022    3935 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2472,"bootTime":1724668237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:11:49.039097    3935 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:11:49.043453    3935 out.go:177] * [offline-docker-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:11:49.052515    3935 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:11:49.052543    3935 notify.go:220] Checking for updates...
	I0826 04:11:49.058778    3935 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:11:49.061383    3935 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:11:49.064418    3935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:11:49.067471    3935 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:11:49.070439    3935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:11:49.073821    3935 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:11:49.073883    3935 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:11:49.077374    3935 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:11:49.084432    3935 start.go:297] selected driver: qemu2
	I0826 04:11:49.084442    3935 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:11:49.084450    3935 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:11:49.086399    3935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:11:49.089438    3935 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:11:49.092415    3935 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:11:49.092432    3935 cni.go:84] Creating CNI manager for ""
	I0826 04:11:49.092439    3935 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:11:49.092449    3935 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:11:49.092485    3935 start.go:340] cluster config:
	{Name:offline-docker-572000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:11:49.095984    3935 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:49.103254    3935 out.go:177] * Starting "offline-docker-572000" primary control-plane node in "offline-docker-572000" cluster
	I0826 04:11:49.107357    3935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:11:49.107384    3935 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:11:49.107393    3935 cache.go:56] Caching tarball of preloaded images
	I0826 04:11:49.107480    3935 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:11:49.107486    3935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:11:49.107557    3935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/offline-docker-572000/config.json ...
	I0826 04:11:49.107568    3935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/offline-docker-572000/config.json: {Name:mk2dd5653edeb77bc9c597608abe6dd98f956681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:11:49.107793    3935 start.go:360] acquireMachinesLock for offline-docker-572000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:11:49.107825    3935 start.go:364] duration metric: took 25.709µs to acquireMachinesLock for "offline-docker-572000"
	I0826 04:11:49.107836    3935 start.go:93] Provisioning new machine with config: &{Name:offline-docker-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:11:49.107868    3935 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:11:49.116398    3935 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0826 04:11:49.132250    3935 start.go:159] libmachine.API.Create for "offline-docker-572000" (driver="qemu2")
	I0826 04:11:49.132279    3935 client.go:168] LocalClient.Create starting
	I0826 04:11:49.132356    3935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:11:49.132386    3935 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:49.132397    3935 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:49.132442    3935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:11:49.132464    3935 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:49.132472    3935 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:49.132828    3935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:11:49.287671    3935 main.go:141] libmachine: Creating SSH key...
	I0826 04:11:49.531794    3935 main.go:141] libmachine: Creating Disk image...
	I0826 04:11:49.531805    3935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:11:49.532200    3935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2
	I0826 04:11:49.547539    3935 main.go:141] libmachine: STDOUT: 
	I0826 04:11:49.547566    3935 main.go:141] libmachine: STDERR: 
	I0826 04:11:49.547617    3935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2 +20000M
	I0826 04:11:49.556031    3935 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:11:49.556052    3935 main.go:141] libmachine: STDERR: 
	I0826 04:11:49.556073    3935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2
	I0826 04:11:49.556078    3935 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:11:49.556088    3935 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:11:49.556114    3935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:e4:63:34:27:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2
	I0826 04:11:49.557896    3935 main.go:141] libmachine: STDOUT: 
	I0826 04:11:49.557912    3935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:11:49.557939    3935 client.go:171] duration metric: took 425.661625ms to LocalClient.Create
	I0826 04:11:51.560029    3935 start.go:128] duration metric: took 2.452191958s to createHost
	I0826 04:11:51.560051    3935 start.go:83] releasing machines lock for "offline-docker-572000", held for 2.45226075s
	W0826 04:11:51.560083    3935 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:11:51.578456    3935 out.go:177] * Deleting "offline-docker-572000" in qemu2 ...
	W0826 04:11:51.589667    3935 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:11:51.589679    3935 start.go:729] Will try again in 5 seconds ...
	I0826 04:11:56.591976    3935 start.go:360] acquireMachinesLock for offline-docker-572000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:11:56.592454    3935 start.go:364] duration metric: took 354.5µs to acquireMachinesLock for "offline-docker-572000"
	I0826 04:11:56.592600    3935 start.go:93] Provisioning new machine with config: &{Name:offline-docker-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:11:56.592916    3935 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:11:56.610439    3935 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0826 04:11:56.660818    3935 start.go:159] libmachine.API.Create for "offline-docker-572000" (driver="qemu2")
	I0826 04:11:56.660881    3935 client.go:168] LocalClient.Create starting
	I0826 04:11:56.660997    3935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:11:56.661061    3935 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:56.661080    3935 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:56.661142    3935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:11:56.661186    3935 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:56.661197    3935 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:56.661711    3935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:11:56.825931    3935 main.go:141] libmachine: Creating SSH key...
	I0826 04:11:56.866833    3935 main.go:141] libmachine: Creating Disk image...
	I0826 04:11:56.866842    3935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:11:56.867018    3935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2
	I0826 04:11:56.876060    3935 main.go:141] libmachine: STDOUT: 
	I0826 04:11:56.876078    3935 main.go:141] libmachine: STDERR: 
	I0826 04:11:56.876134    3935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2 +20000M
	I0826 04:11:56.883878    3935 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:11:56.883892    3935 main.go:141] libmachine: STDERR: 
	I0826 04:11:56.883903    3935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2
	I0826 04:11:56.883907    3935 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:11:56.883919    3935 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:11:56.883952    3935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:05:b0:a9:8e:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/offline-docker-572000/disk.qcow2
	I0826 04:11:56.885459    3935 main.go:141] libmachine: STDOUT: 
	I0826 04:11:56.885476    3935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:11:56.885498    3935 client.go:171] duration metric: took 224.607791ms to LocalClient.Create
	I0826 04:11:58.887640    3935 start.go:128] duration metric: took 2.294724209s to createHost
	I0826 04:11:58.887752    3935 start.go:83] releasing machines lock for "offline-docker-572000", held for 2.295306166s
	W0826 04:11:58.888078    3935 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:11:58.902634    3935 out.go:201] 
	W0826 04:11:58.907906    3935 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:11:58.907935    3935 out.go:270] * 
	* 
	W0826 04:11:58.910533    3935 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:11:58.922881    3935 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-572000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-26 04:11:58.939333 -0700 PDT m=+2246.573810001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-572000 -n offline-docker-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-572000 -n offline-docker-572000: exit status 7 (65.489125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-572000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-572000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-572000
--- FAIL: TestOffline (10.09s)
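
This failure sets the pattern for most of the remaining qemu2 tests: minikube launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and every attempt dies with "Failed to connect to \"/var/run/socket_vmnet\": Connection refused", which points at the socket_vmnet daemon being down on the build agent rather than at the code under test. A minimal triage sketch for the host, assuming the Homebrew-style paths shown in the log; how the daemon is restarted depends on how it was installed (for example, a launchd service):

	# Does the unix socket exist? A missing socket explains "Connection refused".
	ls -l /var/run/socket_vmnet
	# Is the daemon process alive? socket_vmnet must run as root to create vmnet interfaces.
	pgrep -fl socket_vmnet
	# If it is not running, restart it via its supervisor, then re-run one failing
	# test (e.g. TestOffline) to confirm VMs come up before re-running the suite.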

TestCertOptions (10.19s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-682000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-682000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.926720417s)

-- stdout --
	* [cert-options-682000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-682000" primary control-plane node in "cert-options-682000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-682000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-682000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-682000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-682000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-682000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.626ms)

-- stdout --
	* The control-plane node cert-options-682000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-682000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-682000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-682000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-682000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-682000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.351541ms)

-- stdout --
	* The control-plane node cert-options-682000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-682000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-682000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-682000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-682000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-26 04:23:43.837556 -0700 PDT m=+2951.505982668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-682000 -n cert-options-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-682000 -n cert-options-682000: exit status 7 (30.125125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-682000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-682000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-682000
--- FAIL: TestCertOptions (10.19s)
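
The repeated root cause above is socket_vmnet refusing connections, so the VM never starts and every later assertion runs against a stopped host. For context, cert_options_test.go verifies that each value passed via --apiserver-ips and --apiserver-names appears in the apiserver certificate's Subject Alternative Names. A minimal sketch of that kind of SAN check using Go's standard crypto/x509 package (illustrative only; the actual test inspects the cert via `minikube ssh` and openssl) follows:

-- example (illustrative, not from the test run) --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// certContainsSAN reports whether the certificate lists the given value
// among its DNS-name or IP-address Subject Alternative Names.
func certContainsSAN(cert *x509.Certificate, want string) bool {
	for _, name := range cert.DNSNames {
		if name == want {
			return true
		}
	}
	if ip := net.ParseIP(want); ip != nil {
		for _, certIP := range cert.IPAddresses {
			if certIP.Equal(ip) {
				return true
			}
		}
	}
	return false
}

func main() {
	// Same path the test reads inside the guest; this sketch assumes it
	// runs in the VM rather than over SSH.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, want := range []string{"127.0.0.1", "192.168.15.15", "localhost", "www.google.com"} {
		fmt.Printf("%-16s in SAN: %v\n", want, certContainsSAN(cert, want))
	}
}
-- /example --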

TestCertExpiration (196.02s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-652000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-652000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.644048541s)

-- stdout --
	* [cert-expiration-652000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-652000" primary control-plane node in "cert-expiration-652000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-652000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-652000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-652000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.221444083s)

-- stdout --
	* [cert-expiration-652000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-652000" primary control-plane node in "cert-expiration-652000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-652000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-652000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-652000" primary control-plane node in "cert-expiration-652000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-26 04:26:36.183954 -0700 PDT m=+3123.856361543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-652000 -n cert-expiration-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-652000 -n cert-expiration-652000: exit status 7 (69.91475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-652000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-652000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-652000
--- FAIL: TestCertExpiration (196.02s)
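
TestCertExpiration starts the cluster with --cert-expiration=3m, waits for the certificates to lapse, and expects the second start (with --cert-expiration=8760h) to warn about expired certs; here both starts failed on socket_vmnet before the expiry path was ever exercised. The expiry condition itself reduces to comparing a certificate's NotAfter with the current time, as in this hedged sketch (not minikube's implementation):

-- example (illustrative, not from the test run) --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same apiserver cert path used elsewhere in these tests; assumed to
	// be read inside the guest.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A cert is expired once time.Now() is past NotAfter.
	remaining := time.Until(cert.NotAfter)
	if remaining <= 0 {
		fmt.Printf("certificate expired %s ago (NotAfter=%s)\n", -remaining, cert.NotAfter)
	} else {
		fmt.Printf("certificate valid for another %s (NotAfter=%s)\n", remaining, cert.NotAfter)
	}
}
-- /example --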

TestDockerFlags (12.76s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-417000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
E0826 04:23:21.623536    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-417000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.514923917s)

-- stdout --
	* [docker-flags-417000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-417000" primary control-plane node in "docker-flags-417000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-417000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:23:21.027916    4556 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:23:21.028072    4556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:23:21.028077    4556 out.go:358] Setting ErrFile to fd 2...
	I0826 04:23:21.028079    4556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:23:21.028207    4556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:23:21.029470    4556 out.go:352] Setting JSON to false
	I0826 04:23:21.049092    4556 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3164,"bootTime":1724668237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:23:21.049176    4556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:23:21.097369    4556 out.go:177] * [docker-flags-417000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:23:21.107401    4556 notify.go:220] Checking for updates...
	I0826 04:23:21.112362    4556 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:23:21.118322    4556 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:23:21.129461    4556 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:23:21.136329    4556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:23:21.143314    4556 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:23:21.151270    4556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:23:21.155880    4556 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:23:21.155981    4556 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:23:21.156084    4556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:23:21.159390    4556 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:23:21.167396    4556 start.go:297] selected driver: qemu2
	I0826 04:23:21.167406    4556 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:23:21.167416    4556 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:23:21.171204    4556 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:23:21.182407    4556 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:23:21.185553    4556 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0826 04:23:21.185587    4556 cni.go:84] Creating CNI manager for ""
	I0826 04:23:21.185600    4556 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:23:21.185611    4556 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:23:21.185653    4556 start.go:340] cluster config:
	{Name:docker-flags-417000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-417000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:23:21.191315    4556 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:23:21.211362    4556 out.go:177] * Starting "docker-flags-417000" primary control-plane node in "docker-flags-417000" cluster
	I0826 04:23:21.215186    4556 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:23:21.215223    4556 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:23:21.215240    4556 cache.go:56] Caching tarball of preloaded images
	I0826 04:23:21.215367    4556 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:23:21.215376    4556 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:23:21.215479    4556 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/docker-flags-417000/config.json ...
	I0826 04:23:21.215495    4556 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/docker-flags-417000/config.json: {Name:mk25c10bb00f5fe0497eab699f28f199802354a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:23:21.215857    4556 start.go:360] acquireMachinesLock for docker-flags-417000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:23:23.542906    4556 start.go:364] duration metric: took 2.327067125s to acquireMachinesLock for "docker-flags-417000"
	I0826 04:23:23.543048    4556 start.go:93] Provisioning new machine with config: &{Name:docker-flags-417000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-417000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:23:23.543310    4556 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:23:23.554926    4556 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0826 04:23:23.604379    4556 start.go:159] libmachine.API.Create for "docker-flags-417000" (driver="qemu2")
	I0826 04:23:23.604428    4556 client.go:168] LocalClient.Create starting
	I0826 04:23:23.604547    4556 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:23:23.604601    4556 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:23.604619    4556 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:23.604688    4556 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:23:23.604732    4556 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:23.604747    4556 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:23.605488    4556 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:23:23.848638    4556 main.go:141] libmachine: Creating SSH key...
	I0826 04:23:23.900140    4556 main.go:141] libmachine: Creating Disk image...
	I0826 04:23:23.900145    4556 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:23:23.900332    4556 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2
	I0826 04:23:23.909552    4556 main.go:141] libmachine: STDOUT: 
	I0826 04:23:23.909572    4556 main.go:141] libmachine: STDERR: 
	I0826 04:23:23.909612    4556 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2 +20000M
	I0826 04:23:23.917298    4556 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:23:23.917311    4556 main.go:141] libmachine: STDERR: 
	I0826 04:23:23.917320    4556 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2
	I0826 04:23:23.917324    4556 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:23:23.917336    4556 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:23:23.917369    4556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:dd:2f:08:9c:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2
	I0826 04:23:23.918852    4556 main.go:141] libmachine: STDOUT: 
	I0826 04:23:23.918867    4556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:23:23.918892    4556 client.go:171] duration metric: took 314.457833ms to LocalClient.Create
	I0826 04:23:25.921025    4556 start.go:128] duration metric: took 2.3777435s to createHost
	I0826 04:23:25.921086    4556 start.go:83] releasing machines lock for "docker-flags-417000", held for 2.378160084s
	W0826 04:23:25.921151    4556 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:25.942362    4556 out.go:177] * Deleting "docker-flags-417000" in qemu2 ...
	W0826 04:23:25.978147    4556 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:25.978169    4556 start.go:729] Will try again in 5 seconds ...
	I0826 04:23:30.980224    4556 start.go:360] acquireMachinesLock for docker-flags-417000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:23:30.980653    4556 start.go:364] duration metric: took 353.417µs to acquireMachinesLock for "docker-flags-417000"
	I0826 04:23:30.980771    4556 start.go:93] Provisioning new machine with config: &{Name:docker-flags-417000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-417000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:23:30.980969    4556 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:23:30.988006    4556 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0826 04:23:31.037990    4556 start.go:159] libmachine.API.Create for "docker-flags-417000" (driver="qemu2")
	I0826 04:23:31.038041    4556 client.go:168] LocalClient.Create starting
	I0826 04:23:31.038142    4556 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:23:31.038198    4556 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:31.038216    4556 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:31.038271    4556 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:23:31.038302    4556 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:31.038326    4556 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:31.039116    4556 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:23:31.256517    4556 main.go:141] libmachine: Creating SSH key...
	I0826 04:23:31.445664    4556 main.go:141] libmachine: Creating Disk image...
	I0826 04:23:31.445671    4556 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:23:31.445881    4556 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2
	I0826 04:23:31.459973    4556 main.go:141] libmachine: STDOUT: 
	I0826 04:23:31.459997    4556 main.go:141] libmachine: STDERR: 
	I0826 04:23:31.460047    4556 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2 +20000M
	I0826 04:23:31.468408    4556 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:23:31.468423    4556 main.go:141] libmachine: STDERR: 
	I0826 04:23:31.468443    4556 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2
	I0826 04:23:31.468449    4556 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:23:31.468458    4556 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:23:31.468487    4556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f8:85:20:91:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/docker-flags-417000/disk.qcow2
	I0826 04:23:31.470000    4556 main.go:141] libmachine: STDOUT: 
	I0826 04:23:31.470015    4556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:23:31.470026    4556 client.go:171] duration metric: took 431.988417ms to LocalClient.Create
	I0826 04:23:33.472148    4556 start.go:128] duration metric: took 2.491181417s to createHost
	I0826 04:23:33.472218    4556 start.go:83] releasing machines lock for "docker-flags-417000", held for 2.491596458s
	W0826 04:23:33.472625    4556 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-417000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-417000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:33.482198    4556 out.go:201] 
	W0826 04:23:33.486282    4556 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:23:33.486310    4556 out.go:270] * 
	* 
	W0826 04:23:33.488855    4556 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:23:33.497196    4556 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-417000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-417000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-417000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (81.435084ms)

-- stdout --
	* The control-plane node docker-flags-417000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-417000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-417000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-417000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-417000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-417000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-417000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-417000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-417000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.821333ms)

-- stdout --
	* The control-plane node docker-flags-417000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-417000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-417000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-417000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-417000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-417000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-26 04:23:33.642759 -0700 PDT m=+2941.310948918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-417000 -n docker-flags-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-417000 -n docker-flags-417000: exit status 7 (29.735292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-417000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-417000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-417000
--- FAIL: TestDockerFlags (12.76s)
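
TestDockerFlags checks that --docker-env values surface in the docker unit's Environment property and that --docker-opt values (debug, icc=true) surface in its ExecStart line, both read back via `systemctl show`. Since the host never booted, the ssh commands returned the "host is not running" hint instead. A hedged sketch of those assertions, run directly on a machine where dockerd is managed by systemd (illustrative only; the real test shells in with `minikube ssh`):

-- example (illustrative, not from the test run) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// systemctl prints a line like "Environment=FOO=BAR BAZ=BAT".
	out, err := exec.Command("systemctl", "show", "docker",
		"--property=Environment", "--no-pager").Output()
	if err != nil {
		fmt.Println("systemctl failed:", err)
		return
	}
	env := strings.TrimSpace(string(out))
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present in Environment: %v\n", want, strings.Contains(env, want))
	}

	// ExecStart should carry the --docker-opt values; per the test's
	// expectation, --docker-opt=debug shows up as --debug on the
	// dockerd command line.
	out, err = exec.Command("systemctl", "show", "docker",
		"--property=ExecStart", "--no-pager").Output()
	if err != nil {
		fmt.Println("systemctl failed:", err)
		return
	}
	fmt.Printf("--debug present in ExecStart: %v\n", strings.Contains(string(out), "--debug"))
}
-- /example --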

TestForceSystemdFlag (10.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-572000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-572000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.859446417s)

-- stdout --
	* [force-systemd-flag-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-572000" primary control-plane node in "force-systemd-flag-572000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-572000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:22:55.792475    4430 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:22:55.792619    4430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:22:55.792622    4430 out.go:358] Setting ErrFile to fd 2...
	I0826 04:22:55.792625    4430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:22:55.792747    4430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:22:55.793798    4430 out.go:352] Setting JSON to false
	I0826 04:22:55.809955    4430 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3138,"bootTime":1724668237,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:22:55.810026    4430 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:22:55.814302    4430 out.go:177] * [force-systemd-flag-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:22:55.823508    4430 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:22:55.823574    4430 notify.go:220] Checking for updates...
	I0826 04:22:55.830536    4430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:22:55.833514    4430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:22:55.837485    4430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:22:55.840478    4430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:22:55.843529    4430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:22:55.846749    4430 config.go:182] Loaded profile config "NoKubernetes-819000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0826 04:22:55.846819    4430 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:22:55.846870    4430 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:22:55.851423    4430 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:22:55.858380    4430 start.go:297] selected driver: qemu2
	I0826 04:22:55.858387    4430 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:22:55.858393    4430 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:22:55.860630    4430 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:22:55.864508    4430 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:22:55.867617    4430 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0826 04:22:55.867645    4430 cni.go:84] Creating CNI manager for ""
	I0826 04:22:55.867653    4430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:22:55.867658    4430 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:22:55.867692    4430 start.go:340] cluster config:
	{Name:force-systemd-flag-572000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:22:55.871227    4430 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:22:55.879448    4430 out.go:177] * Starting "force-systemd-flag-572000" primary control-plane node in "force-systemd-flag-572000" cluster
	I0826 04:22:55.883302    4430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:22:55.883322    4430 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:22:55.883336    4430 cache.go:56] Caching tarball of preloaded images
	I0826 04:22:55.883404    4430 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:22:55.883409    4430 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:22:55.883475    4430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/force-systemd-flag-572000/config.json ...
	I0826 04:22:55.883487    4430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/force-systemd-flag-572000/config.json: {Name:mkd29579d0f5db4f9c040f2dc1f64f9bf1f4b418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:22:55.883717    4430 start.go:360] acquireMachinesLock for force-systemd-flag-572000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:22:55.883752    4430 start.go:364] duration metric: took 27.709µs to acquireMachinesLock for "force-systemd-flag-572000"
	I0826 04:22:55.883764    4430 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:22:55.883801    4430 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:22:55.891327    4430 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0826 04:22:55.908941    4430 start.go:159] libmachine.API.Create for "force-systemd-flag-572000" (driver="qemu2")
	I0826 04:22:55.908964    4430 client.go:168] LocalClient.Create starting
	I0826 04:22:55.909021    4430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:22:55.909053    4430 main.go:141] libmachine: Decoding PEM data...
	I0826 04:22:55.909063    4430 main.go:141] libmachine: Parsing certificate...
	I0826 04:22:55.909108    4430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:22:55.909132    4430 main.go:141] libmachine: Decoding PEM data...
	I0826 04:22:55.909139    4430 main.go:141] libmachine: Parsing certificate...
	I0826 04:22:55.909498    4430 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:22:56.074561    4430 main.go:141] libmachine: Creating SSH key...
	I0826 04:22:56.166289    4430 main.go:141] libmachine: Creating Disk image...
	I0826 04:22:56.166294    4430 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:22:56.166476    4430 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2
	I0826 04:22:56.175520    4430 main.go:141] libmachine: STDOUT: 
	I0826 04:22:56.175539    4430 main.go:141] libmachine: STDERR: 
	I0826 04:22:56.175584    4430 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2 +20000M
	I0826 04:22:56.183433    4430 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:22:56.183448    4430 main.go:141] libmachine: STDERR: 
	I0826 04:22:56.183467    4430 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2
	I0826 04:22:56.183471    4430 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:22:56.183484    4430 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:22:56.183508    4430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:7d:07:c5:0d:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2
	I0826 04:22:56.185067    4430 main.go:141] libmachine: STDOUT: 
	I0826 04:22:56.185083    4430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:22:56.185102    4430 client.go:171] duration metric: took 276.141792ms to LocalClient.Create
	I0826 04:22:58.187239    4430 start.go:128] duration metric: took 2.303473375s to createHost
	I0826 04:22:58.187288    4430 start.go:83] releasing machines lock for "force-systemd-flag-572000", held for 2.3035835s
	W0826 04:22:58.187361    4430 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:22:58.194154    4430 out.go:177] * Deleting "force-systemd-flag-572000" in qemu2 ...
	W0826 04:22:58.226769    4430 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:22:58.226789    4430 start.go:729] Will try again in 5 seconds ...
	I0826 04:23:03.228828    4430 start.go:360] acquireMachinesLock for force-systemd-flag-572000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:23:03.229226    4430 start.go:364] duration metric: took 298.334µs to acquireMachinesLock for "force-systemd-flag-572000"
	I0826 04:23:03.229329    4430 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:23:03.229652    4430 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:23:03.235461    4430 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0826 04:23:03.285808    4430 start.go:159] libmachine.API.Create for "force-systemd-flag-572000" (driver="qemu2")
	I0826 04:23:03.285866    4430 client.go:168] LocalClient.Create starting
	I0826 04:23:03.285996    4430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:23:03.286073    4430 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:03.286089    4430 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:03.286150    4430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:23:03.286202    4430 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:03.286213    4430 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:03.286870    4430 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:23:03.458454    4430 main.go:141] libmachine: Creating SSH key...
	I0826 04:23:03.557808    4430 main.go:141] libmachine: Creating Disk image...
	I0826 04:23:03.557814    4430 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:23:03.557993    4430 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2
	I0826 04:23:03.567297    4430 main.go:141] libmachine: STDOUT: 
	I0826 04:23:03.567327    4430 main.go:141] libmachine: STDERR: 
	I0826 04:23:03.567386    4430 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2 +20000M
	I0826 04:23:03.575391    4430 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:23:03.575407    4430 main.go:141] libmachine: STDERR: 
	I0826 04:23:03.575426    4430 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2
	I0826 04:23:03.575432    4430 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:23:03.575441    4430 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:23:03.575465    4430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:37:5d:b4:b8:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-flag-572000/disk.qcow2
	I0826 04:23:03.577067    4430 main.go:141] libmachine: STDOUT: 
	I0826 04:23:03.577091    4430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:23:03.577103    4430 client.go:171] duration metric: took 291.2355ms to LocalClient.Create
	I0826 04:23:05.579247    4430 start.go:128] duration metric: took 2.349587375s to createHost
	I0826 04:23:05.579318    4430 start.go:83] releasing machines lock for "force-systemd-flag-572000", held for 2.350100458s
	W0826 04:23:05.579667    4430 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:05.593229    4430 out.go:201] 
	W0826 04:23:05.596241    4430 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:23:05.596267    4430 out.go:270] * 
	* 
	W0826 04:23:05.599305    4430 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:23:05.607236    4430 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-572000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-572000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-572000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.454917ms)

-- stdout --
	* The control-plane node force-systemd-flag-572000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-572000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-572000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-26 04:23:05.702674 -0700 PDT m=+2913.370204084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-572000 -n force-systemd-flag-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-572000 -n force-systemd-flag-572000: exit status 7 (33.334875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-572000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-572000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-572000
--- FAIL: TestForceSystemdFlag (10.05s)
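Both createHost attempts above die at the same point: qemu-system-aarch64 is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never boots. The probe below is a minimal sketch, not part of the test suite; it only checks whether anything is accepting connections on that unix socket (path taken from the SocketVMnetPath field in the config dump above), which is the first thing to verify on the CI host.

// probe_socket_vmnet.go: minimal sketch (not from the minikube repo).
// Dials the unix socket that socket_vmnet_client needs; "connection refused"
// here matches the failure in the log above and means the socket_vmnet
// daemon is not running or not listening on this path.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump above

	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		// Rerun with sudo if the error is "permission denied" rather than "connection refused".
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Until that probe succeeds, the retry-in-5-seconds loop in the log above cannot help: the second attempt fails identically because nothing restarts the daemon in between.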

TestForceSystemdEnv (10.15s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-727000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-727000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.890201875s)

-- stdout --
	* [force-systemd-env-727000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-727000" primary control-plane node in "force-systemd-env-727000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-727000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:23:10.876823    4506 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:23:10.876978    4506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:23:10.876984    4506 out.go:358] Setting ErrFile to fd 2...
	I0826 04:23:10.876987    4506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:23:10.877117    4506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:23:10.878328    4506 out.go:352] Setting JSON to false
	I0826 04:23:10.896427    4506 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3153,"bootTime":1724668237,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:23:10.896508    4506 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:23:10.900779    4506 out.go:177] * [force-systemd-env-727000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:23:10.907762    4506 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:23:10.907790    4506 notify.go:220] Checking for updates...
	I0826 04:23:10.916686    4506 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:23:10.919767    4506 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:23:10.922705    4506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:23:10.925689    4506 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:23:10.928758    4506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0826 04:23:10.932024    4506 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:23:10.932076    4506 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:23:10.935725    4506 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:23:10.942721    4506 start.go:297] selected driver: qemu2
	I0826 04:23:10.942736    4506 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:23:10.942745    4506 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:23:10.945077    4506 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:23:10.947687    4506 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:23:10.950850    4506 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0826 04:23:10.950869    4506 cni.go:84] Creating CNI manager for ""
	I0826 04:23:10.950878    4506 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:23:10.950885    4506 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:23:10.950927    4506 start.go:340] cluster config:
	{Name:force-systemd-env-727000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:23:10.955399    4506 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:23:10.962691    4506 out.go:177] * Starting "force-systemd-env-727000" primary control-plane node in "force-systemd-env-727000" cluster
	I0826 04:23:10.965586    4506 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:23:10.965618    4506 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:23:10.965625    4506 cache.go:56] Caching tarball of preloaded images
	I0826 04:23:10.965724    4506 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:23:10.965730    4506 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:23:10.965789    4506 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/force-systemd-env-727000/config.json ...
	I0826 04:23:10.965804    4506 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/force-systemd-env-727000/config.json: {Name:mkad55f04ff904e3bd4be00e9af87e18fefb99f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:23:10.966039    4506 start.go:360] acquireMachinesLock for force-systemd-env-727000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:23:10.966073    4506 start.go:364] duration metric: took 26.75µs to acquireMachinesLock for "force-systemd-env-727000"
	I0826 04:23:10.966085    4506 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:23:10.966113    4506 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:23:10.973659    4506 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0826 04:23:10.990233    4506 start.go:159] libmachine.API.Create for "force-systemd-env-727000" (driver="qemu2")
	I0826 04:23:10.990265    4506 client.go:168] LocalClient.Create starting
	I0826 04:23:10.990332    4506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:23:10.990361    4506 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:10.990371    4506 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:10.990409    4506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:23:10.990437    4506 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:10.990447    4506 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:10.990830    4506 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:23:11.159031    4506 main.go:141] libmachine: Creating SSH key...
	I0826 04:23:11.240923    4506 main.go:141] libmachine: Creating Disk image...
	I0826 04:23:11.240937    4506 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:23:11.241133    4506 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2
	I0826 04:23:11.250559    4506 main.go:141] libmachine: STDOUT: 
	I0826 04:23:11.250586    4506 main.go:141] libmachine: STDERR: 
	I0826 04:23:11.250657    4506 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2 +20000M
	I0826 04:23:11.259324    4506 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:23:11.259348    4506 main.go:141] libmachine: STDERR: 
	I0826 04:23:11.259366    4506 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2
	I0826 04:23:11.259374    4506 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:23:11.259390    4506 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:23:11.259420    4506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c4:8d:6e:50:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2
	I0826 04:23:11.261024    4506 main.go:141] libmachine: STDOUT: 
	I0826 04:23:11.261040    4506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:23:11.261060    4506 client.go:171] duration metric: took 270.79575ms to LocalClient.Create
	I0826 04:23:13.263201    4506 start.go:128] duration metric: took 2.297121583s to createHost
	I0826 04:23:13.263310    4506 start.go:83] releasing machines lock for "force-systemd-env-727000", held for 2.297281958s
	W0826 04:23:13.263372    4506 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:13.273622    4506 out.go:177] * Deleting "force-systemd-env-727000" in qemu2 ...
	W0826 04:23:13.304351    4506 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:13.304368    4506 start.go:729] Will try again in 5 seconds ...
	I0826 04:23:18.306413    4506 start.go:360] acquireMachinesLock for force-systemd-env-727000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:23:18.306979    4506 start.go:364] duration metric: took 436.375µs to acquireMachinesLock for "force-systemd-env-727000"
	I0826 04:23:18.307131    4506 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:23:18.307402    4506 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:23:18.317045    4506 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0826 04:23:18.368285    4506 start.go:159] libmachine.API.Create for "force-systemd-env-727000" (driver="qemu2")
	I0826 04:23:18.368334    4506 client.go:168] LocalClient.Create starting
	I0826 04:23:18.368458    4506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:23:18.368525    4506 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:18.368547    4506 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:18.368609    4506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:23:18.368656    4506 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:18.368676    4506 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:18.369222    4506 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:23:18.588618    4506 main.go:141] libmachine: Creating SSH key...
	I0826 04:23:18.673545    4506 main.go:141] libmachine: Creating Disk image...
	I0826 04:23:18.673550    4506 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:23:18.673728    4506 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2
	I0826 04:23:18.682730    4506 main.go:141] libmachine: STDOUT: 
	I0826 04:23:18.682750    4506 main.go:141] libmachine: STDERR: 
	I0826 04:23:18.682799    4506 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2 +20000M
	I0826 04:23:18.690666    4506 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:23:18.690681    4506 main.go:141] libmachine: STDERR: 
	I0826 04:23:18.690694    4506 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2
	I0826 04:23:18.690698    4506 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:23:18.690713    4506 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:23:18.690750    4506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:e4:9c:bb:90:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/force-systemd-env-727000/disk.qcow2
	I0826 04:23:18.692399    4506 main.go:141] libmachine: STDOUT: 
	I0826 04:23:18.692414    4506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:23:18.692426    4506 client.go:171] duration metric: took 324.094417ms to LocalClient.Create
	I0826 04:23:20.694559    4506 start.go:128] duration metric: took 2.387181375s to createHost
	I0826 04:23:20.694633    4506 start.go:83] releasing machines lock for "force-systemd-env-727000", held for 2.387685917s
	W0826 04:23:20.694904    4506 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-727000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-727000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:20.708203    4506 out.go:201] 
	W0826 04:23:20.713478    4506 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:23:20.713507    4506 out.go:270] * 
	* 
	W0826 04:23:20.715642    4506 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:23:20.726401    4506 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-727000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-727000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-727000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (83.730208ms)

-- stdout --
	* The control-plane node force-systemd-env-727000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-727000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-727000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-26 04:23:20.820765 -0700 PDT m=+2928.488655168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-727000 -n force-systemd-env-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-727000 -n force-systemd-env-727000: exit status 7 (34.47125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-727000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-727000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-727000
--- FAIL: TestForceSystemdEnv (10.15s)
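This is the same socket_vmnet failure as TestForceSystemdFlag above; MINIKUBE_FORCE_SYSTEMD=true never comes into play because the VM is never created. Since both tests reach the daemon through the same client binary, the round trip can be exercised end to end without building a disk image. The wrapper below is a sketch under that assumption: it reuses the SocketVMnetClientPath and SocketVMnetPath values logged above and substitutes /usr/bin/true for the qemu command line, following the client's invocation pattern shown in the log (socket path first, then the command to wrap).

// check_socket_vmnet_client.go: sketch that invokes the same client binary the
// failing tests use, but wraps a no-op command instead of qemu-system-aarch64.
// If the daemon is down, the 'Failed to connect to "/var/run/socket_vmnet"'
// stderr line from the log reproduces here in well under a second.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/opt/socket_vmnet/bin/socket_vmnet_client", // SocketVMnetClientPath from the config dump
		"/var/run/socket_vmnet",                     // SocketVMnetPath from the config dump
		"/usr/bin/true",                             // stand-in for the qemu invocation
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "round trip failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("socket_vmnet round trip OK")
}

This reproduces only the connection step; it says nothing about vmnet networking beyond the socket, but that is the step every failure in these two tests stops at.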

TestFunctional/parallel/ServiceCmdConnect (34.19s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-690000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-690000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-wfvnn" [f594c9bb-1962-4173-8bb5-f8c8d3ebe9e1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-wfvnn" [f594c9bb-1962-4173-8bb5-f8c8d3ebe9e1] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.008554416s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31575
functional_test.go:1661: error fetching http://192.168.105.4:31575: Get "http://192.168.105.4:31575": dial tcp 192.168.105.4:31575: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31575: Get "http://192.168.105.4:31575": dial tcp 192.168.105.4:31575: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31575: Get "http://192.168.105.4:31575": dial tcp 192.168.105.4:31575: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31575: Get "http://192.168.105.4:31575": dial tcp 192.168.105.4:31575: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31575: Get "http://192.168.105.4:31575": dial tcp 192.168.105.4:31575: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31575: Get "http://192.168.105.4:31575": dial tcp 192.168.105.4:31575: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31575: Get "http://192.168.105.4:31575": dial tcp 192.168.105.4:31575: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31575: Get "http://192.168.105.4:31575": dial tcp 192.168.105.4:31575: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-690000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-wfvnn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-690000/192.168.105.4
Start Time:       Mon, 26 Aug 2024 03:44:59 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://e535073b7aa59609589166e411d5e313b766891ae38c8880a8cf6fc44681d50e
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 26 Aug 2024 03:45:16 -0700
      Finished:     Mon, 26 Aug 2024 03:45:16 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-clqsq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-clqsq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-wfvnn to functional-690000
  Normal   Pulled     16s (x3 over 32s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    16s (x3 over 32s)  kubelet            Created container echoserver-arm
  Normal   Started    16s (x3 over 32s)  kubelet            Started container echoserver-arm
  Warning  BackOff    0s (x3 over 31s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-wfvnn_default(f594c9bb-1962-4173-8bb5-f8c8d3ebe9e1)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-690000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
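That single captured log line is the root cause of this failure: "exec format error" means the kernel refused to execute /usr/sbin/nginx, which on an arm64 node is the classic sign of a binary compiled for a different architecture inside the image, despite the -arm tag on registry.k8s.io/echoserver-arm:1.8. Because the entrypoint dies on exec, the pod cycles through CrashLoopBackOff, the Service described below ends up with an empty Endpoints list, and that in turn is why every fetch of http://192.168.105.4:31575 earlier was refused. A hedged way to confirm the mismatch is to copy the binary out of the image first (for example with "docker cp" from a created container; the local path passed below is hypothetical) and read its ELF header:

// elfarch.go: minimal sketch that prints the target architecture of an ELF
// binary, e.g. one copied out of the echoserver-arm image. EM_AARCH64 would
// run on this node; EM_X86_64 would explain "exec format error".
package main

import (
	"debug/elf"
	"fmt"
	"log"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <path-to-binary>", os.Args[0])
	}
	f, err := elf.Open(os.Args[1]) // fails if the file is not an ELF binary at all
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	fmt.Println(f.Machine) // prints the ELF e_machine field, e.g. EM_X86_64
}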
functional_test.go:1614: (dbg) Run:  kubectl --context functional-690000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.102.90
IPs:                      10.111.102.90
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31575/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-690000 -n functional-690000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-690000 ssh -- ls                                                                                          | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh cat                                                                                            | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | /mount-9p/test-1724669119820873000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh stat                                                                                           | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh stat                                                                                           | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh sudo                                                                                           | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh findmnt                                                                                        | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-690000                                                                                                 | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2684084787/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh findmnt                                                                                        | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh -- ls                                                                                          | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh sudo                                                                                           | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-690000                                                                                                 | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1704285439/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-690000                                                                                                 | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1704285439/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-690000                                                                                                 | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1704285439/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh findmnt                                                                                        | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh findmnt                                                                                        | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh findmnt                                                                                        | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh findmnt                                                                                        | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh findmnt                                                                                        | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh findmnt                                                                                        | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-690000 ssh findmnt                                                                                        | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT | 26 Aug 24 03:45 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-690000                                                                                                 | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-690000                                                                                                 | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-690000 --dry-run                                                                                       | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-690000                                                                                                 | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-690000 | jenkins | v1.33.1 | 26 Aug 24 03:45 PDT |                     |
	|           | -p functional-690000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
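	
	Note: the mount/findmnt sequence above exercises minikube's 9p mount support. As a minimal sketch (not part of the captured log; /tmp/testdir stands in for the temp directory shown in the table), the same verification can be reproduced by hand:
	
	  minikube -p functional-690000 mount /tmp/testdir:/mount-9p --port 46464 &
	  minikube -p functional-690000 ssh "findmnt -T /mount-9p | grep 9p"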
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 03:45:27
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 03:45:27.471261    2497 out.go:345] Setting OutFile to fd 1 ...
	I0826 03:45:27.471373    2497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:45:27.471376    2497 out.go:358] Setting ErrFile to fd 2...
	I0826 03:45:27.471379    2497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:45:27.471514    2497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 03:45:27.472998    2497 out.go:352] Setting JSON to false
	I0826 03:45:27.489990    2497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":891,"bootTime":1724668236,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 03:45:27.490082    2497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 03:45:27.498494    2497 out.go:177] * [functional-690000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 03:45:27.506874    2497 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 03:45:27.506915    2497 notify.go:220] Checking for updates...
	I0826 03:45:27.514762    2497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 03:45:27.517778    2497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 03:45:27.519036    2497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 03:45:27.521711    2497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 03:45:27.524754    2497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 03:45:27.528074    2497 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 03:45:27.528387    2497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 03:45:27.532747    2497 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 03:45:27.539740    2497 start.go:297] selected driver: qemu2
	I0826 03:45:27.539747    2497 start.go:901] validating driver "qemu2" against &{Name:functional-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 03:45:27.539808    2497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 03:45:27.546758    2497 out.go:201] 
	W0826 03:45:27.550815    2497 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0826 03:45:27.554704    2497 out.go:201] 
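	
	The exit above is the expected result of the dry-run invocation recorded in the command table: minikube validates the requested memory against its 1800MB usable minimum before provisioning anything. Reproduction with the same flags as the logged run:
	
	  minikube start -p functional-690000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2
	  # exits with RSRC_INSUFFICIENT_REQ_MEMORY (250MiB requested < 1800MB minimum)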
	
	
	==> Docker <==
	Aug 26 10:45:28 functional-690000 dockerd[5681]: time="2024-08-26T10:45:28.475983443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 26 10:45:28 functional-690000 dockerd[5681]: time="2024-08-26T10:45:28.475991111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 26 10:45:28 functional-690000 dockerd[5681]: time="2024-08-26T10:45:28.476035533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 26 10:45:28 functional-690000 dockerd[5681]: time="2024-08-26T10:45:28.475953231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 26 10:45:28 functional-690000 dockerd[5681]: time="2024-08-26T10:45:28.475983443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 26 10:45:28 functional-690000 dockerd[5681]: time="2024-08-26T10:45:28.475991277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 26 10:45:28 functional-690000 dockerd[5681]: time="2024-08-26T10:45:28.476030949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 26 10:45:28 functional-690000 cri-dockerd[5932]: time="2024-08-26T10:45:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ed24103f16d4bdc3a74731cbc31b14ba28b703e00cf7d4d706a633b3abce5098/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 26 10:45:28 functional-690000 cri-dockerd[5932]: time="2024-08-26T10:45:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e1eecac9a8c8dc6ef994f469dedc191471b40da28d4245961bd846901231bc0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 26 10:45:28 functional-690000 dockerd[5675]: time="2024-08-26T10:45:28.778307318Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Aug 26 10:45:31 functional-690000 dockerd[5681]: time="2024-08-26T10:45:31.157998935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 26 10:45:31 functional-690000 dockerd[5681]: time="2024-08-26T10:45:31.158030522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 26 10:45:31 functional-690000 dockerd[5681]: time="2024-08-26T10:45:31.158038898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 26 10:45:31 functional-690000 dockerd[5681]: time="2024-08-26T10:45:31.158099864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 26 10:45:31 functional-690000 dockerd[5681]: time="2024-08-26T10:45:31.190168758Z" level=info msg="shim disconnected" id=4cfa1cae9c72c98ad97b03bcee6bc9df4c32e4bee2b614ae90563f6b59872734 namespace=moby
	Aug 26 10:45:31 functional-690000 dockerd[5681]: time="2024-08-26T10:45:31.190269812Z" level=warning msg="cleaning up after shim disconnected" id=4cfa1cae9c72c98ad97b03bcee6bc9df4c32e4bee2b614ae90563f6b59872734 namespace=moby
	Aug 26 10:45:31 functional-690000 dockerd[5681]: time="2024-08-26T10:45:31.190291731Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 26 10:45:31 functional-690000 dockerd[5675]: time="2024-08-26T10:45:31.190522883Z" level=info msg="ignoring event" container=4cfa1cae9c72c98ad97b03bcee6bc9df4c32e4bee2b614ae90563f6b59872734 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 26 10:45:31 functional-690000 dockerd[5681]: time="2024-08-26T10:45:31.207055472Z" level=warning msg="cleanup warnings time=\"2024-08-26T10:45:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 26 10:45:32 functional-690000 cri-dockerd[5932]: time="2024-08-26T10:45:32Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Aug 26 10:45:32 functional-690000 dockerd[5681]: time="2024-08-26T10:45:32.932641401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 26 10:45:32 functional-690000 dockerd[5681]: time="2024-08-26T10:45:32.932675988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 26 10:45:32 functional-690000 dockerd[5681]: time="2024-08-26T10:45:32.932683948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 26 10:45:32 functional-690000 dockerd[5681]: time="2024-08-26T10:45:32.932971689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 26 10:45:32 functional-690000 dockerd[5675]: time="2024-08-26T10:45:32.997085520Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
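	
	The dashboard and metrics-scraper images above are pulled by digest. A sketch for confirming what landed in the node's Docker image store (standard docker CLI, run over minikube ssh):
	
	  minikube -p functional-690000 ssh -- docker images --digests | grep kubernetesui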
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	defaefabfbd38       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        1 second ago         Running             kubernetes-dashboard      0                   9e1eecac9a8c8       kubernetes-dashboard-695b96c756-dhqjj
	4cfa1cae9c72c       72565bf5bbedf                                                                                         2 seconds ago        Exited              echoserver-arm            3                   540bc13353123       hello-node-64b4f8f9ff-7mkv9
	f7cf6deb9ff71       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   11 seconds ago       Exited              mount-munger              0                   f4c7139e0a0db       busybox-mount
	e535073b7aa59       72565bf5bbedf                                                                                         17 seconds ago       Exited              echoserver-arm            2                   d8a3e42319d86       hello-node-connect-65d86f57f4-wfvnn
	c695b80013cd1       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add                         20 seconds ago       Running             myfrontend                0                   93961c51c195e       sp-pod
	d2fc2c632b0ca       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                         41 seconds ago       Running             nginx                     0                   22a6be689f91d       nginx-svc
	4dace5916dac6       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   4663418a3906f       coredns-6f6b679f8f-9ffv7
	9b3b278fcff09       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   47e3b7d24bab8       storage-provisioner
	63abdcc2ec071       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   0fa90c15d7ba6       kube-proxy-pnd97
	3d70e21ae486d       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   dc0bde5aa6124       kube-controller-manager-functional-690000
	1be7eed2d1f0f       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   9ef9e3a1c80d7       etcd-functional-690000
	ab0d0abcb05f0       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   bc52447193cb2       kube-scheduler-functional-690000
	9621165b5dc05       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   c08bcd95d3fbd       kube-apiserver-functional-690000
	fe97401e25fb0       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   91e87a834c605       coredns-6f6b679f8f-9ffv7
	88dd7625f2605       71d55d66fd4ee                                                                                         2 minutes ago        Exited              kube-proxy                1                   706d0c3e3b25c       kube-proxy-pnd97
	c9ad7010d7d13       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   36f0a81c56e60       storage-provisioner
	bdc0e497d6a1d       fbbbd428abb4d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   6a161ef6934a2       kube-scheduler-functional-690000
	3ea478f209113       fcb0683e6bdbd                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   375796db1c9d6       kube-controller-manager-functional-690000
	88b7c5bb17cdb       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   10f342591779c       etcd-functional-690000
	
	
	==> coredns [4dace5916dac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43860 - 56978 "HINFO IN 3451558860893983463.4912840189554902990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004131455s
	[INFO] 10.244.0.1:20171 - 48208 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00010243s
	[INFO] 10.244.0.1:13945 - 24042 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000096637s
	[INFO] 10.244.0.1:2939 - 21296 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000983625s
	[INFO] 10.244.0.1:50324 - 39645 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000090511s
	[INFO] 10.244.0.1:31883 - 24790 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000066508s
	[INFO] 10.244.0.1:31773 - 26497 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000106472s
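	
	The A/AAAA queries for nginx-svc.default.svc.cluster.local were generated by the functional tests; a sketch that produces the same query pattern by hand (the busybox image/tag is an arbitrary choice, not taken from the log):
	
	  kubectl --context functional-690000 run dnstest --rm -it --restart=Never \
	    --image=busybox:1.28 -- nslookup nginx-svc.default.svc.cluster.local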
	
	
	==> coredns [fe97401e25fb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46022 - 52021 "HINFO IN 1145069948354652110.3865543397340216294. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005127932s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-690000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-690000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=functional-690000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T03_43_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 10:42:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-690000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 10:45:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 10:45:18 +0000   Mon, 26 Aug 2024 10:42:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 10:45:18 +0000   Mon, 26 Aug 2024 10:42:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 10:45:18 +0000   Mon, 26 Aug 2024 10:42:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 10:45:18 +0000   Mon, 26 Aug 2024 10:43:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-690000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 d48a015f6d7e437993f261d8d6b0f926
	  System UUID:                d48a015f6d7e437993f261d8d6b0f926
	  Boot ID:                    2ed41356-e361-44bd-a0ae-09dfde37adce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-7mkv9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     hello-node-connect-65d86f57f4-wfvnn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 coredns-6f6b679f8f-9ffv7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m28s
	  kube-system                 etcd-functional-690000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m34s
	  kube-system                 kube-apiserver-functional-690000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-690000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-pnd97                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-functional-690000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-nmt9n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-dhqjj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m27s                kube-proxy       
	  Normal  Starting                 74s                  kube-proxy       
	  Normal  Starting                 2m                   kube-proxy       
	  Normal  Starting                 2m38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m38s                kubelet          Node functional-690000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s                kubelet          Node functional-690000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s                kubelet          Node functional-690000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m34s                kubelet          Node functional-690000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m34s                kubelet          Node functional-690000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m34s                kubelet          Node functional-690000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m34s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m30s                kubelet          Node functional-690000 status is now: NodeReady
	  Normal  RegisteredNode           2m29s                node-controller  Node functional-690000 event: Registered Node functional-690000 in Controller
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node functional-690000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node functional-690000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node functional-690000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           119s                 node-controller  Node functional-690000 event: Registered Node functional-690000 in Controller
	  Normal  Starting                 80s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)    kubelet          Node functional-690000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)    kubelet          Node functional-690000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)    kubelet          Node functional-690000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                  node-controller  Node functional-690000 event: Registered Node functional-690000 in Controller
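	
	The node dump above matches the output of kubectl describe node; to regenerate it against this cluster:
	
	  kubectl --context functional-690000 describe node functional-690000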
	
	
	==> dmesg <==
	[  +8.876930] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.738984] systemd-fstab-generator[4766]: Ignoring "noauto" option for root device
	[ +10.434596] systemd-fstab-generator[5200]: Ignoring "noauto" option for root device
	[  +0.051970] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.090208] systemd-fstab-generator[5234]: Ignoring "noauto" option for root device
	[  +0.097695] systemd-fstab-generator[5247]: Ignoring "noauto" option for root device
	[  +0.091844] systemd-fstab-generator[5262]: Ignoring "noauto" option for root device
	[Aug26 10:44] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.422895] systemd-fstab-generator[5881]: Ignoring "noauto" option for root device
	[  +0.067520] systemd-fstab-generator[5893]: Ignoring "noauto" option for root device
	[  +0.071118] systemd-fstab-generator[5905]: Ignoring "noauto" option for root device
	[  +0.073468] systemd-fstab-generator[5920]: Ignoring "noauto" option for root device
	[  +0.215909] systemd-fstab-generator[6093]: Ignoring "noauto" option for root device
	[  +1.175568] systemd-fstab-generator[6216]: Ignoring "noauto" option for root device
	[  +1.085942] kauditd_printk_skb: 189 callbacks suppressed
	[  +6.076703] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.932628] systemd-fstab-generator[7255]: Ignoring "noauto" option for root device
	[  +5.068356] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.099330] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.102862] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.026301] kauditd_printk_skb: 13 callbacks suppressed
	[Aug26 10:45] kauditd_printk_skb: 38 callbacks suppressed
	[ +15.599942] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.265995] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.056359] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [1be7eed2d1f0] <==
	{"level":"info","ts":"2024-08-26T10:44:15.021381Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T10:44:15.026674Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T10:44:15.027372Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T10:44:15.027490Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T10:44:15.027528Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T10:44:15.027600Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-26T10:44:15.027620Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-26T10:44:16.812699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-26T10:44:16.812844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-26T10:44:16.812926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-26T10:44:16.813123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-26T10:44:16.813484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-26T10:44:16.813513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-26T10:44:16.813537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-26T10:44:16.816406Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T10:44:16.816701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T10:44:16.816411Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-690000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T10:44:16.817318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T10:44:16.817497Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T10:44:16.818901Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T10:44:16.818901Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T10:44:16.821082Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T10:44:16.821212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"warn","ts":"2024-08-26T10:45:04.192545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.229057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-08-26T10:45:04.192609Z","caller":"traceutil/trace.go:171","msg":"trace[589087771] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:747; }","duration":"136.303567ms","start":"2024-08-26T10:45:04.056296Z","end":"2024-08-26T10:45:04.192599Z","steps":["trace[589087771] 'range keys from in-memory index tree'  (duration: 136.134462ms)"],"step_count":1}
	
	
	==> etcd [88b7c5bb17cd] <==
	{"level":"info","ts":"2024-08-26T10:43:30.698494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-26T10:43:30.698560Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-26T10:43:30.698599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-26T10:43:30.698659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-26T10:43:30.698690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-26T10:43:30.698718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-26T10:43:30.701700Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-690000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T10:43:30.701773Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T10:43:30.702546Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T10:43:30.702586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T10:43:30.703008Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T10:43:30.704380Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T10:43:30.704786Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T10:43:30.706821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T10:43:30.707289Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-26T10:43:59.827567Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-26T10:43:59.827597Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-690000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-26T10:43:59.827652Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T10:43:59.827670Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T10:43:59.827719Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T10:43:59.827759Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-26T10:43:59.841878Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-26T10:43:59.846812Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-26T10:43:59.846870Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-26T10:43:59.846875Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-690000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 10:45:33 up 2 min,  0 users,  load average: 0.93, 0.47, 0.19
	Linux functional-690000 5.10.207 #1 SMP PREEMPT Thu Aug 15 18:35:44 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9621165b5dc0] <==
	I0826 10:44:17.434167       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0826 10:44:17.434237       1 aggregator.go:171] initial CRD sync complete...
	I0826 10:44:17.434248       1 autoregister_controller.go:144] Starting autoregister controller
	I0826 10:44:17.434251       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0826 10:44:17.434254       1 cache.go:39] Caches are synced for autoregister controller
	I0826 10:44:17.435312       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0826 10:44:17.450982       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 10:44:17.450993       1 policy_source.go:224] refreshing policies
	I0826 10:44:17.481343       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0826 10:44:18.324575       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0826 10:44:18.670247       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0826 10:44:18.673944       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0826 10:44:18.684680       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0826 10:44:18.691591       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0826 10:44:18.693621       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0826 10:44:20.945074       1 controller.go:615] quota admission added evaluator for: endpoints
	I0826 10:44:21.048218       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0826 10:44:38.683120       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.238.213"}
	I0826 10:44:43.793782       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0826 10:44:43.835944       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.210.78"}
	I0826 10:44:49.254770       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.81.108"}
	I0826 10:44:59.700013       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.102.90"}
	I0826 10:45:28.086710       1 controller.go:615] quota admission added evaluator for: namespaces
	I0826 10:45:28.166764       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.96.42"}
	I0826 10:45:28.180724       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.144.71"}
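	
	Each "allocated clusterIPs" line corresponds to a Service created during the test run; the assignments can be cross-checked with:
	
	  kubectl --context functional-690000 get svc -A -o wide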
	
	
	==> kube-controller-manager [3d70e21ae486] <==
	I0826 10:45:04.741647       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.42µs"
	I0826 10:45:16.031962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="51.006µs"
	I0826 10:45:16.931450       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="64.174µs"
	I0826 10:45:18.516246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-690000"
	I0826 10:45:20.013946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="22.253µs"
	I0826 10:45:28.113914       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.280258ms"
	E0826 10:45:28.114226       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0826 10:45:28.122715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.920174ms"
	E0826 10:45:28.122734       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0826 10:45:28.122723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.114336ms"
	E0826 10:45:28.122778       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0826 10:45:28.126836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.696317ms"
	E0826 10:45:28.126897       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0826 10:45:28.126916       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.835375ms"
	E0826 10:45:28.126921       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0826 10:45:28.140475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.992043ms"
	I0826 10:45:28.151428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.929702ms"
	I0826 10:45:28.151473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.793µs"
	I0826 10:45:28.151551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="16.704047ms"
	I0826 10:45:28.160358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="8.770698ms"
	I0826 10:45:28.160488       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="28.878µs"
	I0826 10:45:32.010604       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="25.836µs"
	I0826 10:45:32.199206       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="27.336µs"
	I0826 10:45:33.221881       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.998363ms"
	I0826 10:45:33.222481       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="24.836µs"
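	
	The "serviceaccount \"kubernetes-dashboard\" not found" errors are transient ordering noise: the dashboard ReplicaSets were synced before their ServiceAccount had been created, and the later successful syncs show the controller recovering on retry. A quick check that the objects eventually materialized:
	
	  kubectl --context functional-690000 -n kubernetes-dashboard get serviceaccounts,pods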
	
	
	==> kube-controller-manager [3ea478f20911] <==
	I0826 10:43:34.583033       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0826 10:43:34.583075       1 shared_informer.go:320] Caches are synced for TTL
	I0826 10:43:34.583101       1 shared_informer.go:320] Caches are synced for HPA
	I0826 10:43:34.584832       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0826 10:43:34.585769       1 shared_informer.go:320] Caches are synced for node
	I0826 10:43:34.585790       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0826 10:43:34.585811       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0826 10:43:34.585817       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0826 10:43:34.585819       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0826 10:43:34.585859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-690000"
	I0826 10:43:34.671086       1 shared_informer.go:320] Caches are synced for deployment
	I0826 10:43:34.681217       1 shared_informer.go:320] Caches are synced for disruption
	I0826 10:43:34.682436       1 shared_informer.go:320] Caches are synced for persistent volume
	I0826 10:43:34.736268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="198.404272ms"
	I0826 10:43:34.736361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="40.85µs"
	I0826 10:43:34.741790       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0826 10:43:34.758633       1 shared_informer.go:320] Caches are synced for resource quota
	I0826 10:43:34.781735       1 shared_informer.go:320] Caches are synced for endpoint
	I0826 10:43:34.782887       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0826 10:43:34.785934       1 shared_informer.go:320] Caches are synced for resource quota
	I0826 10:43:35.201495       1 shared_informer.go:320] Caches are synced for garbage collector
	I0826 10:43:35.282494       1 shared_informer.go:320] Caches are synced for garbage collector
	I0826 10:43:35.282510       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0826 10:43:41.236656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="12.814906ms"
	I0826 10:43:41.237315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="363.139µs"
	
	
	==> kube-proxy [63abdcc2ec07] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 10:44:18.554248       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 10:44:18.557663       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0826 10:44:18.557690       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 10:44:18.614253       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 10:44:18.614270       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 10:44:18.614282       1 server_linux.go:169] "Using iptables Proxier"
	I0826 10:44:18.614968       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 10:44:18.615124       1 server.go:483] "Version info" version="v1.31.0"
	I0826 10:44:18.615134       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 10:44:18.615646       1 config.go:197] "Starting service config controller"
	I0826 10:44:18.615660       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 10:44:18.615676       1 config.go:104] "Starting endpoint slice config controller"
	I0826 10:44:18.615695       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 10:44:18.615912       1 config.go:326] "Starting node config controller"
	I0826 10:44:18.616065       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 10:44:18.716669       1 shared_informer.go:320] Caches are synced for node config
	I0826 10:44:18.716669       1 shared_informer.go:320] Caches are synced for service config
	I0826 10:44:18.716683       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
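Note: the truncated lines at the top of the kube-proxy block above are the tail of its nftables cleanup attempt. The failure is expected on this guest kernel: kube-proxy probes for nf_tables support at startup, hits "Operation not supported", and falls back to the iptables proxier, which the later "Using iptables Proxier" line confirms. A minimal way to reproduce the probe by hand inside the VM, assuming the nft binary is installed there (it is not part of the test itself):

	# feed the same rule kube-proxy uses through nft's stdin reader; on a
	# kernel without nf_tables this fails with "Operation not supported"
	echo 'add table ip kube-proxy' | nft -f /dev/stdin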
	
	==> kube-proxy [88dd7625f260] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 10:43:32.587675       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 10:43:32.593274       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0826 10:43:32.593416       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 10:43:32.622266       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 10:43:32.622290       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 10:43:32.622307       1 server_linux.go:169] "Using iptables Proxier"
	I0826 10:43:32.623403       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 10:43:32.623534       1 server.go:483] "Version info" version="v1.31.0"
	I0826 10:43:32.623544       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 10:43:32.624146       1 config.go:197] "Starting service config controller"
	I0826 10:43:32.624152       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 10:43:32.624157       1 config.go:104] "Starting endpoint slice config controller"
	I0826 10:43:32.624159       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 10:43:32.624272       1 config.go:326] "Starting node config controller"
	I0826 10:43:32.624275       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 10:43:32.724586       1 shared_informer.go:320] Caches are synced for service config
	I0826 10:43:32.724586       1 shared_informer.go:320] Caches are synced for node config
	I0826 10:43:32.724596       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ab0d0abcb05f] <==
	I0826 10:44:15.301239       1 serving.go:386] Generated self-signed cert in-memory
	W0826 10:44:17.329549       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0826 10:44:17.329565       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0826 10:44:17.329570       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0826 10:44:17.329573       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0826 10:44:17.388720       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0826 10:44:17.388823       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 10:44:17.389772       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0826 10:44:17.389838       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0826 10:44:17.389846       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0826 10:44:17.389900       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0826 10:44:17.490954       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
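Note: the three authentication warnings above are benign for this run: the scheduler cannot read the extension-apiserver-authentication ConfigMap, so it continues while treating requests as anonymous. The log's own hint names the fix; one plausible concrete form is sketched below, where the rolebinding name is illustrative and, because the scheduler authenticates as the user system:kube-scheduler rather than a service account, --user stands in for the hint's --serviceaccount:

	kubectl -n kube-system create rolebinding scheduler-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler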
	
	==> kube-scheduler [bdc0e497d6a1] <==
	I0826 10:43:30.061432       1 serving.go:386] Generated self-signed cert in-memory
	W0826 10:43:31.231027       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0826 10:43:31.231082       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0826 10:43:31.231102       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0826 10:43:31.231120       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0826 10:43:31.236366       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0826 10:43:31.236378       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 10:43:31.237691       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0826 10:43:31.238081       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0826 10:43:31.238443       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0826 10:43:31.238117       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0826 10:43:31.340064       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0826 10:43:59.831995       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0826 10:43:59.832090       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0826 10:43:59.832154       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 26 10:45:16 functional-690000 kubelet[6223]: E0826 10:45:16.920787    6223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-wfvnn_default(f594c9bb-1962-4173-8bb5-f8c8d3ebe9e1)\"" pod="default/hello-node-connect-65d86f57f4-wfvnn" podUID="f594c9bb-1962-4173-8bb5-f8c8d3ebe9e1"
	Aug 26 10:45:20 functional-690000 kubelet[6223]: I0826 10:45:20.003031    6223 scope.go:117] "RemoveContainer" containerID="37bcf0a3ad2c29a7f87ee3fdf5b4bd1ea8e839b028d3508a8255ec59ce4da105"
	Aug 26 10:45:20 functional-690000 kubelet[6223]: E0826 10:45:20.003119    6223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-7mkv9_default(812b093a-8e28-4d57-85e9-b90187f90d64)\"" pod="default/hello-node-64b4f8f9ff-7mkv9" podUID="812b093a-8e28-4d57-85e9-b90187f90d64"
	Aug 26 10:45:20 functional-690000 kubelet[6223]: I0826 10:45:20.589195    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f0646188-5b5b-432d-a1b9-676ca37f2128-test-volume\") pod \"busybox-mount\" (UID: \"f0646188-5b5b-432d-a1b9-676ca37f2128\") " pod="default/busybox-mount"
	Aug 26 10:45:20 functional-690000 kubelet[6223]: I0826 10:45:20.589224    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghxh\" (UniqueName: \"kubernetes.io/projected/f0646188-5b5b-432d-a1b9-676ca37f2128-kube-api-access-7ghxh\") pod \"busybox-mount\" (UID: \"f0646188-5b5b-432d-a1b9-676ca37f2128\") " pod="default/busybox-mount"
	Aug 26 10:45:24 functional-690000 kubelet[6223]: I0826 10:45:24.224192    6223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ghxh\" (UniqueName: \"kubernetes.io/projected/f0646188-5b5b-432d-a1b9-676ca37f2128-kube-api-access-7ghxh\") pod \"f0646188-5b5b-432d-a1b9-676ca37f2128\" (UID: \"f0646188-5b5b-432d-a1b9-676ca37f2128\") "
	Aug 26 10:45:24 functional-690000 kubelet[6223]: I0826 10:45:24.224365    6223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f0646188-5b5b-432d-a1b9-676ca37f2128-test-volume\") pod \"f0646188-5b5b-432d-a1b9-676ca37f2128\" (UID: \"f0646188-5b5b-432d-a1b9-676ca37f2128\") "
	Aug 26 10:45:24 functional-690000 kubelet[6223]: I0826 10:45:24.224406    6223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0646188-5b5b-432d-a1b9-676ca37f2128-test-volume" (OuterVolumeSpecName: "test-volume") pod "f0646188-5b5b-432d-a1b9-676ca37f2128" (UID: "f0646188-5b5b-432d-a1b9-676ca37f2128"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 26 10:45:24 functional-690000 kubelet[6223]: I0826 10:45:24.224990    6223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0646188-5b5b-432d-a1b9-676ca37f2128-kube-api-access-7ghxh" (OuterVolumeSpecName: "kube-api-access-7ghxh") pod "f0646188-5b5b-432d-a1b9-676ca37f2128" (UID: "f0646188-5b5b-432d-a1b9-676ca37f2128"). InnerVolumeSpecName "kube-api-access-7ghxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 26 10:45:24 functional-690000 kubelet[6223]: I0826 10:45:24.327339    6223 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7ghxh\" (UniqueName: \"kubernetes.io/projected/f0646188-5b5b-432d-a1b9-676ca37f2128-kube-api-access-7ghxh\") on node \"functional-690000\" DevicePath \"\""
	Aug 26 10:45:24 functional-690000 kubelet[6223]: I0826 10:45:24.327361    6223 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f0646188-5b5b-432d-a1b9-676ca37f2128-test-volume\") on node \"functional-690000\" DevicePath \"\""
	Aug 26 10:45:25 functional-690000 kubelet[6223]: I0826 10:45:25.049718    6223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4c7139e0a0db09dec63651547c9bea58d68300acf13b8e3c182d96521afe155"
	Aug 26 10:45:28 functional-690000 kubelet[6223]: E0826 10:45:28.137543    6223 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0646188-5b5b-432d-a1b9-676ca37f2128" containerName="mount-munger"
	Aug 26 10:45:28 functional-690000 kubelet[6223]: I0826 10:45:28.137570    6223 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0646188-5b5b-432d-a1b9-676ca37f2128" containerName="mount-munger"
	Aug 26 10:45:28 functional-690000 kubelet[6223]: I0826 10:45:28.257501    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/52df9a18-317a-45ef-a482-1e796de32665-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-nmt9n\" (UID: \"52df9a18-317a-45ef-a482-1e796de32665\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-nmt9n"
	Aug 26 10:45:28 functional-690000 kubelet[6223]: I0826 10:45:28.257527    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/49f25167-4bde-485f-9032-84b59a914d22-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-dhqjj\" (UID: \"49f25167-4bde-485f-9032-84b59a914d22\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-dhqjj"
	Aug 26 10:45:28 functional-690000 kubelet[6223]: I0826 10:45:28.257539    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pmb8\" (UniqueName: \"kubernetes.io/projected/49f25167-4bde-485f-9032-84b59a914d22-kube-api-access-8pmb8\") pod \"kubernetes-dashboard-695b96c756-dhqjj\" (UID: \"49f25167-4bde-485f-9032-84b59a914d22\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-dhqjj"
	Aug 26 10:45:28 functional-690000 kubelet[6223]: I0826 10:45:28.257549    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nz7d\" (UniqueName: \"kubernetes.io/projected/52df9a18-317a-45ef-a482-1e796de32665-kube-api-access-5nz7d\") pod \"dashboard-metrics-scraper-c5db448b4-nmt9n\" (UID: \"52df9a18-317a-45ef-a482-1e796de32665\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-nmt9n"
	Aug 26 10:45:31 functional-690000 kubelet[6223]: I0826 10:45:31.003165    6223 scope.go:117] "RemoveContainer" containerID="37bcf0a3ad2c29a7f87ee3fdf5b4bd1ea8e839b028d3508a8255ec59ce4da105"
	Aug 26 10:45:32 functional-690000 kubelet[6223]: I0826 10:45:32.003821    6223 scope.go:117] "RemoveContainer" containerID="e535073b7aa59609589166e411d5e313b766891ae38c8880a8cf6fc44681d50e"
	Aug 26 10:45:32 functional-690000 kubelet[6223]: E0826 10:45:32.003902    6223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-wfvnn_default(f594c9bb-1962-4173-8bb5-f8c8d3ebe9e1)\"" pod="default/hello-node-connect-65d86f57f4-wfvnn" podUID="f594c9bb-1962-4173-8bb5-f8c8d3ebe9e1"
	Aug 26 10:45:32 functional-690000 kubelet[6223]: I0826 10:45:32.192994    6223 scope.go:117] "RemoveContainer" containerID="37bcf0a3ad2c29a7f87ee3fdf5b4bd1ea8e839b028d3508a8255ec59ce4da105"
	Aug 26 10:45:32 functional-690000 kubelet[6223]: I0826 10:45:32.193162    6223 scope.go:117] "RemoveContainer" containerID="4cfa1cae9c72c98ad97b03bcee6bc9df4c32e4bee2b614ae90563f6b59872734"
	Aug 26 10:45:32 functional-690000 kubelet[6223]: E0826 10:45:32.193227    6223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-7mkv9_default(812b093a-8e28-4d57-85e9-b90187f90d64)\"" pod="default/hello-node-64b4f8f9ff-7mkv9" podUID="812b093a-8e28-4d57-85e9-b90187f90d64"
	Aug 26 10:45:33 functional-690000 kubelet[6223]: I0826 10:45:33.214409    6223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-dhqjj" podStartSLOduration=0.98697593 podStartE2EDuration="5.214396879s" podCreationTimestamp="2024-08-26 10:45:28 +0000 UTC" firstStartedPulling="2024-08-26 10:45:28.55615591 +0000 UTC m=+74.619596686" lastFinishedPulling="2024-08-26 10:45:32.783576817 +0000 UTC m=+78.847017635" observedRunningTime="2024-08-26 10:45:33.214258655 +0000 UTC m=+79.277699431" watchObservedRunningTime="2024-08-26 10:45:33.214396879 +0000 UTC m=+79.277837655"
	
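Note: the echoserver-arm restarts above follow the kubelet's standard crash backoff, which doubles the delay after each failed start (the log shows back-off 20s and then back-off 40s) up to a cap of five minutes. The usual first step is to read the logs of the last terminated container instance; a sketch using the pod name from this run, assuming the cluster is still reachable:

	# --previous prints logs from the last terminated instance of the container
	kubectl --context functional-690000 logs --previous hello-node-connect-65d86f57f4-wfvnn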
	
	==> kubernetes-dashboard [defaefabfbd3] <==
	2024/08/26 10:45:32 Starting overwatch
	2024/08/26 10:45:32 Using namespace: kubernetes-dashboard
	2024/08/26 10:45:32 Using in-cluster config to connect to apiserver
	2024/08/26 10:45:32 Using secret token for csrf signing
	2024/08/26 10:45:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/26 10:45:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/26 10:45:32 Successful initial request to the apiserver, version: v1.31.0
	2024/08/26 10:45:32 Generating JWE encryption key
	2024/08/26 10:45:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/26 10:45:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/26 10:45:33 Initializing JWE encryption key from synchronized object
	2024/08/26 10:45:33 Creating in-cluster Sidecar client
	2024/08/26 10:45:33 Serving insecurely on HTTP port: 9090
	2024/08/26 10:45:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
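Note: the final health-check failure above looks like a startup race rather than a dashboard fault: the dashboard came up before the dashboard-metrics-scraper Service had ready endpoints, and it retries after 30 seconds. A quick way to check whether the scraper became reachable, using the names from this log and assuming the cluster is still up:

	kubectl --context functional-690000 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper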
	
	==> storage-provisioner [9b3b278fcff0] <==
	I0826 10:44:18.516321       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 10:44:18.520936       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 10:44:18.520952       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 10:44:35.944221       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 10:44:35.945593       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"83c399b9-da3c-4ad2-876d-c217706c79a7", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-690000_999660ce-aea2-4187-b79f-08cf94a1e0b1 became leader
	I0826 10:44:35.945779       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-690000_999660ce-aea2-4187-b79f-08cf94a1e0b1!
	I0826 10:44:36.046620       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-690000_999660ce-aea2-4187-b79f-08cf94a1e0b1!
	I0826 10:45:00.466262       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0826 10:45:00.466404       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    7356e999-f105-4721-96b4-3b7c755fd951 317 0 2024-08-26 10:43:05 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-26 10:43:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-2d75c673-27ce-4b46-ae3a-d7d07fb7cc4f &PersistentVolumeClaim{ObjectMeta:{myclaim  default  2d75c673-27ce-4b46-ae3a-d7d07fb7cc4f 725 0 2024-08-26 10:45:00 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-26 10:45:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-26 10:45:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0826 10:45:00.466856       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-2d75c673-27ce-4b46-ae3a-d7d07fb7cc4f" provisioned
	I0826 10:45:00.466886       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0826 10:45:00.466905       1 volume_store.go:212] Trying to save persistentvolume "pvc-2d75c673-27ce-4b46-ae3a-d7d07fb7cc4f"
	I0826 10:45:00.467578       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2d75c673-27ce-4b46-ae3a-d7d07fb7cc4f", APIVersion:"v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0826 10:45:00.471727       1 volume_store.go:219] persistentvolume "pvc-2d75c673-27ce-4b46-ae3a-d7d07fb7cc4f" saved
	I0826 10:45:00.471984       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2d75c673-27ce-4b46-ae3a-d7d07fb7cc4f", APIVersion:"v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2d75c673-27ce-4b46-ae3a-d7d07fb7cc4f
	
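Note: the block above is one complete hostpath provisioning cycle: the controller picks up default/myclaim against the standard StorageClass, backs it with /tmp/hostpath-provisioner/default/myclaim inside the VM, saves the PersistentVolume, and emits the ProvisioningSucceeded event. One way to confirm the binding afterwards, assuming the cluster is still reachable:

	# prints the bound PV name; for this run that is pvc-2d75c673-27ce-4b46-ae3a-d7d07fb7cc4f
	kubectl --context functional-690000 get pvc myclaim -o jsonpath='{.spec.volumeName}'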
	
	==> storage-provisioner [c9ad7010d7d1] <==
	I0826 10:43:32.577594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 10:43:32.593814       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 10:43:32.593833       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 10:43:32.606978       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 10:43:32.607183       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-690000_93dc800d-156e-4d12-8db4-5007341d3e3c!
	I0826 10:43:32.607644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"83c399b9-da3c-4ad2-876d-c217706c79a7", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-690000_93dc800d-156e-4d12-8db4-5007341d3e3c became leader
	I0826 10:43:32.708256       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-690000_93dc800d-156e-4d12-8db4-5007341d3e3c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-690000 -n functional-690000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-690000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-nmt9n
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-690000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-nmt9n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-690000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-nmt9n: exit status 1 (39.555292ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-690000/192.168.105.4
	Start Time:       Mon, 26 Aug 2024 03:45:20 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://f7cf6deb9ff712ab254443708aee919cceacd6309d58293bd812407b8828dba8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 26 Aug 2024 03:45:22 -0700
	      Finished:     Mon, 26 Aug 2024 03:45:22 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7ghxh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7ghxh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-690000
	  Normal  Pulling    13s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.264s (1.264s including waiting). Image size: 3547125 bytes.
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-nmt9n" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-690000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-nmt9n: exit status 1
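Note: two details explain the post-mortem above. First, the field selector status.phase!=Running also matches Succeeded pods, which is why the completed busybox-mount helper is listed as "non-running". Second, describe exited 1 because the dashboard-metrics-scraper pod listed moments earlier no longer existed by the time describe ran, hence the NotFound in stderr. A quick check of the first point, assuming the cluster is still reachable:

	# a completed helper pod reports Succeeded, which is != Running
	kubectl --context functional-690000 get pod busybox-mount -o jsonpath='{.status.phase}'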
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.19s)

TestMultiControlPlane/serial/StopSecondaryNode (214.1s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 node stop m02 -v=7 --alsologtostderr
E0826 03:49:43.706911    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:43.714522    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:43.727359    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:43.750718    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:43.794078    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:43.875727    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:44.038734    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:44.362107    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:45.005551    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:46.288928    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:49:48.851218    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-139000 node stop m02 -v=7 --alsologtostderr: (12.164582709s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr
E0826 03:49:53.974464    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:50:04.217083    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:50:24.699883    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:51:05.661694    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:52:27.683348    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr: exit status 7 (2m55.972816166s)

-- stdout --
	ha-139000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-139000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-139000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-139000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0826 03:49:53.415774    2805 out.go:345] Setting OutFile to fd 1 ...
	I0826 03:49:53.415931    2805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:49:53.415935    2805 out.go:358] Setting ErrFile to fd 2...
	I0826 03:49:53.415937    2805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:49:53.416054    2805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 03:49:53.416167    2805 out.go:352] Setting JSON to false
	I0826 03:49:53.416180    2805 mustload.go:65] Loading cluster: ha-139000
	I0826 03:49:53.416256    2805 notify.go:220] Checking for updates...
	I0826 03:49:53.416428    2805 config.go:182] Loaded profile config "ha-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 03:49:53.416445    2805 status.go:255] checking status of ha-139000 ...
	I0826 03:49:53.417379    2805 status.go:330] ha-139000 host status = "Running" (err=<nil>)
	I0826 03:49:53.417398    2805 host.go:66] Checking if "ha-139000" exists ...
	I0826 03:49:53.417536    2805 host.go:66] Checking if "ha-139000" exists ...
	I0826 03:49:53.417646    2805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 03:49:53.417660    2805 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/id_rsa Username:docker}
	W0826 03:50:19.343255    2805 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0826 03:50:19.343360    2805 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0826 03:50:19.343373    2805 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0826 03:50:19.343378    2805 status.go:257] ha-139000 status: &{Name:ha-139000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 03:50:19.343395    2805 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0826 03:50:19.343399    2805 status.go:255] checking status of ha-139000-m02 ...
	I0826 03:50:19.343627    2805 status.go:330] ha-139000-m02 host status = "Stopped" (err=<nil>)
	I0826 03:50:19.343633    2805 status.go:343] host is not running, skipping remaining checks
	I0826 03:50:19.343635    2805 status.go:257] ha-139000-m02 status: &{Name:ha-139000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 03:50:19.343640    2805 status.go:255] checking status of ha-139000-m03 ...
	I0826 03:50:19.344329    2805 status.go:330] ha-139000-m03 host status = "Running" (err=<nil>)
	I0826 03:50:19.344338    2805 host.go:66] Checking if "ha-139000-m03" exists ...
	I0826 03:50:19.344483    2805 host.go:66] Checking if "ha-139000-m03" exists ...
	I0826 03:50:19.344606    2805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 03:50:19.344614    2805 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m03/id_rsa Username:docker}
	W0826 03:51:34.343875    2805 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0826 03:51:34.343917    2805 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0826 03:51:34.343925    2805 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0826 03:51:34.343929    2805 status.go:257] ha-139000-m03 status: &{Name:ha-139000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 03:51:34.343938    2805 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0826 03:51:34.343943    2805 status.go:255] checking status of ha-139000-m04 ...
	I0826 03:51:34.344651    2805 status.go:330] ha-139000-m04 host status = "Running" (err=<nil>)
	I0826 03:51:34.344658    2805 host.go:66] Checking if "ha-139000-m04" exists ...
	I0826 03:51:34.344767    2805 host.go:66] Checking if "ha-139000-m04" exists ...
	I0826 03:51:34.344879    2805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 03:51:34.344886    2805 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m04/id_rsa Username:docker}
	W0826 03:52:49.444637    2805 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0826 03:52:49.444684    2805 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0826 03:52:49.444691    2805 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0826 03:52:49.444695    2805 status.go:257] ha-139000-m04 status: &{Name:ha-139000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0826 03:52:49.444703    2805 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
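Note: every node check above fails the same way: the SSH dials to port 22 on 192.168.105.5, .7, and .8 time out, so minikube reports host: Error and kubelet/apiserver: Nonexistent without ever reaching the guests. Taken together with the "Failed to connect to /var/run/socket_vmnet: Connection refused" errors later in this report, the likely root cause is host-side: the socket_vmnet daemon that provides networking for the qemu2 driver is no longer accepting connections. A first diagnostic on the host, using the socket path from these logs:

	# the qemu2 driver relies on this socket; if it is missing or refusing
	# connections, every VM loses its network
	ls -l /var/run/socket_vmnet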
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr": ha-139000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-139000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-139000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr": ha-139000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-139000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-139000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr": ha-139000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-139000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-139000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 3 (25.9587575s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0826 03:53:15.403362    2846 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0826 03:53:15.403378    2846 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.10s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.33s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0826 03:53:21.675154    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.371623458s)
ha_test.go:413: expected profile "ha-139000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-139000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-139000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-139000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
E0826 03:54:43.799519    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 3 (25.961103875s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0826 03:54:59.730752    2873 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0826 03:54:59.730790    2873 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.33s)

TestMultiControlPlane/serial/RestartSecondaryNode (208.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-139000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.129652333s)

-- stdout --
	* Starting "ha-139000-m02" control-plane node in "ha-139000" cluster
	* Restarting existing qemu2 VM for "ha-139000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-139000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
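Note: the stdout above shows the driver's retry pattern: minikube tries to restart the m02 VM, gets "Connection refused" from /var/run/socket_vmnet, waits five seconds, retries once, and then gives up with GUEST_NODE_PROVISION (visible in the stderr that follows). Since the socket itself refuses connections, restarting the daemon on the host is the usual remedy; a sketch assuming socket_vmnet was installed through Homebrew as the qemu2 driver docs suggest:

	# restart the host-side networking daemon that the qemu2 driver depends on
	sudo brew services restart socket_vmnet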
** stderr ** 
	I0826 03:54:59.798720    2885 out.go:345] Setting OutFile to fd 1 ...
	I0826 03:54:59.799025    2885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:54:59.799029    2885 out.go:358] Setting ErrFile to fd 2...
	I0826 03:54:59.799036    2885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:54:59.799198    2885 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 03:54:59.799519    2885 mustload.go:65] Loading cluster: ha-139000
	I0826 03:54:59.799825    2885 config.go:182] Loaded profile config "ha-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0826 03:54:59.800129    2885 host.go:58] "ha-139000-m02" host status: Stopped
	I0826 03:54:59.804655    2885 out.go:177] * Starting "ha-139000-m02" control-plane node in "ha-139000" cluster
	I0826 03:54:59.808444    2885 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 03:54:59.808458    2885 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 03:54:59.808469    2885 cache.go:56] Caching tarball of preloaded images
	I0826 03:54:59.808562    2885 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 03:54:59.808570    2885 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 03:54:59.808644    2885 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/ha-139000/config.json ...
	I0826 03:54:59.809044    2885 start.go:360] acquireMachinesLock for ha-139000-m02: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 03:54:59.809095    2885 start.go:364] duration metric: took 36.25µs to acquireMachinesLock for "ha-139000-m02"
	I0826 03:54:59.809105    2885 start.go:96] Skipping create...Using existing machine configuration
	I0826 03:54:59.809114    2885 fix.go:54] fixHost starting: m02
	I0826 03:54:59.809269    2885 fix.go:112] recreateIfNeeded on ha-139000-m02: state=Stopped err=<nil>
	W0826 03:54:59.809276    2885 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 03:54:59.813538    2885 out.go:177] * Restarting existing qemu2 VM for "ha-139000-m02" ...
	I0826 03:54:59.816484    2885 qemu.go:418] Using hvf for hardware acceleration
	I0826 03:54:59.816535    2885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:2f:9d:eb:95:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m02/disk.qcow2
	I0826 03:54:59.819322    2885 main.go:141] libmachine: STDOUT: 
	I0826 03:54:59.819345    2885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 03:54:59.819370    2885 fix.go:56] duration metric: took 10.256792ms for fixHost
	I0826 03:54:59.819375    2885 start.go:83] releasing machines lock for "ha-139000-m02", held for 10.274083ms
	W0826 03:54:59.819383    2885 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 03:54:59.819423    2885 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 03:54:59.819427    2885 start.go:729] Will try again in 5 seconds ...
	I0826 03:55:04.820473    2885 start.go:360] acquireMachinesLock for ha-139000-m02: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 03:55:04.821046    2885 start.go:364] duration metric: took 428.334µs to acquireMachinesLock for "ha-139000-m02"
	I0826 03:55:04.821243    2885 start.go:96] Skipping create...Using existing machine configuration
	I0826 03:55:04.821263    2885 fix.go:54] fixHost starting: m02
	I0826 03:55:04.822154    2885 fix.go:112] recreateIfNeeded on ha-139000-m02: state=Stopped err=<nil>
	W0826 03:55:04.822180    2885 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 03:55:04.826382    2885 out.go:177] * Restarting existing qemu2 VM for "ha-139000-m02" ...
	I0826 03:55:04.830367    2885 qemu.go:418] Using hvf for hardware acceleration
	I0826 03:55:04.830603    2885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:2f:9d:eb:95:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m02/disk.qcow2
	I0826 03:55:04.839965    2885 main.go:141] libmachine: STDOUT: 
	I0826 03:55:04.840054    2885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 03:55:04.840184    2885 fix.go:56] duration metric: took 18.916125ms for fixHost
	I0826 03:55:04.840212    2885 start.go:83] releasing machines lock for "ha-139000-m02", held for 19.105958ms
	W0826 03:55:04.840466    2885 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 03:55:04.844364    2885 out.go:201] 
	W0826 03:55:04.848442    2885 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 03:55:04.848468    2885 out.go:270] * 
	* 
	W0826 03:55:04.854888    2885 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 03:55:04.858387    2885 out.go:201] 

** /stderr **
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-139000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr
E0826 03:55:11.524265    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr: exit status 7 (2m57.143406292s)

-- stdout --
	ha-139000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-139000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-139000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-139000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0826 03:55:04.921753    2890 out.go:345] Setting OutFile to fd 1 ...
	I0826 03:55:04.921934    2890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:55:04.921939    2890 out.go:358] Setting ErrFile to fd 2...
	I0826 03:55:04.921942    2890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:55:04.922086    2890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 03:55:04.922247    2890 out.go:352] Setting JSON to false
	I0826 03:55:04.922262    2890 mustload.go:65] Loading cluster: ha-139000
	I0826 03:55:04.922298    2890 notify.go:220] Checking for updates...
	I0826 03:55:04.922557    2890 config.go:182] Loaded profile config "ha-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 03:55:04.922564    2890 status.go:255] checking status of ha-139000 ...
	I0826 03:55:04.923365    2890 status.go:330] ha-139000 host status = "Running" (err=<nil>)
	I0826 03:55:04.923375    2890 host.go:66] Checking if "ha-139000" exists ...
	I0826 03:55:04.923495    2890 host.go:66] Checking if "ha-139000" exists ...
	I0826 03:55:04.923630    2890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 03:55:04.923639    2890 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/id_rsa Username:docker}
	W0826 03:55:04.923844    2890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0826 03:55:04.923866    2890 retry.go:31] will retry after 335.808604ms: dial tcp 192.168.105.5:22: connect: host is down
	W0826 03:55:05.262304    2890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0826 03:55:05.262483    2890 retry.go:31] will retry after 264.058698ms: dial tcp 192.168.105.5:22: connect: host is down
	W0826 03:55:05.529113    2890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0826 03:55:05.529208    2890 retry.go:31] will retry after 546.090556ms: dial tcp 192.168.105.5:22: connect: host is down
	W0826 03:55:31.997374    2890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0826 03:55:31.997461    2890 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0826 03:55:31.997472    2890 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0826 03:55:31.997477    2890 status.go:257] ha-139000 status: &{Name:ha-139000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 03:55:31.997489    2890 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0826 03:55:31.997513    2890 status.go:255] checking status of ha-139000-m02 ...
	I0826 03:55:31.997797    2890 status.go:330] ha-139000-m02 host status = "Stopped" (err=<nil>)
	I0826 03:55:31.997802    2890 status.go:343] host is not running, skipping remaining checks
	I0826 03:55:31.997804    2890 status.go:257] ha-139000-m02 status: &{Name:ha-139000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 03:55:31.997814    2890 status.go:255] checking status of ha-139000-m03 ...
	I0826 03:55:31.998511    2890 status.go:330] ha-139000-m03 host status = "Running" (err=<nil>)
	I0826 03:55:31.998518    2890 host.go:66] Checking if "ha-139000-m03" exists ...
	I0826 03:55:31.998627    2890 host.go:66] Checking if "ha-139000-m03" exists ...
	I0826 03:55:31.998756    2890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 03:55:31.998767    2890 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m03/id_rsa Username:docker}
	W0826 03:56:47.000426    2890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0826 03:56:47.000675    2890 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0826 03:56:47.000714    2890 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0826 03:56:47.000732    2890 status.go:257] ha-139000-m03 status: &{Name:ha-139000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 03:56:47.000774    2890 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0826 03:56:47.000794    2890 status.go:255] checking status of ha-139000-m04 ...
	I0826 03:56:47.003810    2890 status.go:330] ha-139000-m04 host status = "Running" (err=<nil>)
	I0826 03:56:47.003841    2890 host.go:66] Checking if "ha-139000-m04" exists ...
	I0826 03:56:47.004340    2890 host.go:66] Checking if "ha-139000-m04" exists ...
	I0826 03:56:47.004865    2890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 03:56:47.004893    2890 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000-m04/id_rsa Username:docker}
	W0826 03:58:02.006225    2890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0826 03:58:02.006274    2890 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0826 03:58:02.006281    2890 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0826 03:58:02.006286    2890 status.go:257] ha-139000-m04 status: &{Name:ha-139000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0826 03:58:02.006294    2890 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr" : exit status 7
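Note: the near-three-minute status run above is dominated by SSH dials: a few quick retries while the dial returns "host is down", then a final attempt that rides the OS-level connect timeout ("operation timed out"). A rough Go sketch of that dial shape follows; the address and delays echo this run's retry.go lines, and dialWithRetry is an illustrative stand-in, not minikube's actual sshutil.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// dialWithRetry makes a few quick attempts with short pauses, then
	// one last attempt left to the OS connect timeout, matching the
	// "will retry after ..." lines in the log above.
	func dialWithRetry(addr string, delays []time.Duration) (net.Conn, error) {
		for _, d := range delays {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				return conn, nil
			}
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return net.Dial("tcp", addr) // final attempt, OS default timeout
	}
	
	func main() {
		// 192.168.105.5:22 is ha-139000's SSH endpoint in this run; the
		// delays mirror the retry intervals logged by the status check.
		delays := []time.Duration{336 * time.Millisecond, 264 * time.Millisecond, 546 * time.Millisecond}
		if _, err := dialWithRetry("192.168.105.5:22", delays); err != nil {
			fmt.Println("status check fails with:", err)
		}
	}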
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
E0826 03:58:21.670305    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 3 (25.959762833s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0826 03:58:27.965728    2932 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0826 03:58:27.965738    2932 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.23s)
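Note: every start attempt in this test dies at the same call: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. Below is a minimal Go sketch of a pre-flight probe for that socket; the path comes from this run, and the probe itself is a hypothetical diagnostic, not minikube code.
	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	// probeSocket dials a unix-domain socket and reports whether a daemon
	// is accepting connections. A "connection refused" from this dial is
	// the same failure mode the qemu2 driver logs above.
	func probeSocket(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}
	
	func main() {
		const path = "/var/run/socket_vmnet" // socket path from the failing run
		if err := probeSocket(path); err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is accepting connections")
	}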

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-139000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-139000 -v=7 --alsologtostderr
E0826 04:03:21.665323    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-139000 -v=7 --alsologtostderr: (3m49.024683917s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-139000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-139000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229841208s)

-- stdout --
	* [ha-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-139000" primary control-plane node in "ha-139000" cluster
	* Restarting existing qemu2 VM for "ha-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:03:36.058302    3335 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:03:36.058518    3335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:03:36.058523    3335 out.go:358] Setting ErrFile to fd 2...
	I0826 04:03:36.058526    3335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:03:36.058685    3335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:03:36.060000    3335 out.go:352] Setting JSON to false
	I0826 04:03:36.079854    3335 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1979,"bootTime":1724668237,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:03:36.079925    3335 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:03:36.084930    3335 out.go:177] * [ha-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:03:36.093099    3335 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:03:36.093147    3335 notify.go:220] Checking for updates...
	I0826 04:03:36.100136    3335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:03:36.103955    3335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:03:36.108033    3335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:03:36.111090    3335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:03:36.114023    3335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:03:36.117404    3335 config.go:182] Loaded profile config "ha-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:03:36.117458    3335 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:03:36.122053    3335 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:03:36.129111    3335 start.go:297] selected driver: qemu2
	I0826 04:03:36.129121    3335 start.go:901] validating driver "qemu2" against &{Name:ha-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-139000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:03:36.129216    3335 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:03:36.131734    3335 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:03:36.131759    3335 cni.go:84] Creating CNI manager for ""
	I0826 04:03:36.131763    3335 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0826 04:03:36.131814    3335 start.go:340] cluster config:
	{Name:ha-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-139000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:03:36.135659    3335 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:03:36.144020    3335 out.go:177] * Starting "ha-139000" primary control-plane node in "ha-139000" cluster
	I0826 04:03:36.148057    3335 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:03:36.148073    3335 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:03:36.148081    3335 cache.go:56] Caching tarball of preloaded images
	I0826 04:03:36.148137    3335 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:03:36.148143    3335 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:03:36.148217    3335 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/ha-139000/config.json ...
	I0826 04:03:36.148654    3335 start.go:360] acquireMachinesLock for ha-139000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:03:36.148689    3335 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "ha-139000"
	I0826 04:03:36.148698    3335 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:03:36.148704    3335 fix.go:54] fixHost starting: 
	I0826 04:03:36.148837    3335 fix.go:112] recreateIfNeeded on ha-139000: state=Stopped err=<nil>
	W0826 04:03:36.148846    3335 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:03:36.153066    3335 out.go:177] * Restarting existing qemu2 VM for "ha-139000" ...
	I0826 04:03:36.160921    3335 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:03:36.160959    3335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:63:eb:4f:8c:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/disk.qcow2
	I0826 04:03:36.163040    3335 main.go:141] libmachine: STDOUT: 
	I0826 04:03:36.163060    3335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:03:36.163094    3335 fix.go:56] duration metric: took 14.391333ms for fixHost
	I0826 04:03:36.163098    3335 start.go:83] releasing machines lock for "ha-139000", held for 14.404875ms
	W0826 04:03:36.163106    3335 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:03:36.163142    3335 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:03:36.163146    3335 start.go:729] Will try again in 5 seconds ...
	I0826 04:03:41.165275    3335 start.go:360] acquireMachinesLock for ha-139000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:03:41.165692    3335 start.go:364] duration metric: took 339µs to acquireMachinesLock for "ha-139000"
	I0826 04:03:41.165832    3335 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:03:41.165851    3335 fix.go:54] fixHost starting: 
	I0826 04:03:41.166552    3335 fix.go:112] recreateIfNeeded on ha-139000: state=Stopped err=<nil>
	W0826 04:03:41.166578    3335 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:03:41.170034    3335 out.go:177] * Restarting existing qemu2 VM for "ha-139000" ...
	I0826 04:03:41.177901    3335 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:03:41.178127    3335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:63:eb:4f:8c:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/disk.qcow2
	I0826 04:03:41.186991    3335 main.go:141] libmachine: STDOUT: 
	I0826 04:03:41.187069    3335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:03:41.187156    3335 fix.go:56] duration metric: took 21.304625ms for fixHost
	I0826 04:03:41.187175    3335 start.go:83] releasing machines lock for "ha-139000", held for 21.461125ms
	W0826 04:03:41.187387    3335 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:03:41.195940    3335 out.go:201] 
	W0826 04:03:41.199995    3335 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:03:41.200031    3335 out.go:270] * 
	* 
	W0826 04:03:41.202964    3335 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:03:41.208961    3335 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-139000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-139000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 7 (32.913375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)
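Note: the restart path makes exactly two attempts separated by a fixed five-second pause ("Will try again in 5 seconds ..."), then exits with status 80 (GUEST_PROVISION). A compact Go sketch of that control flow follows; startHost is a hypothetical stand-in for the qemu2 driver call that fails in this run.
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// startHost stands in for the driver start that fails in this run.
	func startHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	
	func main() {
		const retryDelay = 5 * time.Second // matches "Will try again in 5 seconds ..."
		err := startHost("ha-139000")
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(retryDelay)
			err = startHost("ha-139000")
		}
		if err != nil {
			// The second failure is terminal: minikube exits with
			// status 80 (GUEST_PROVISION), as seen in the log above.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}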

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-139000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.467833ms)

-- stdout --
	* The control-plane node ha-139000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-139000"

-- /stdout --
** stderr ** 
	I0826 04:03:41.354082    3347 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:03:41.354314    3347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:03:41.354317    3347 out.go:358] Setting ErrFile to fd 2...
	I0826 04:03:41.354320    3347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:03:41.354449    3347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:03:41.354674    3347 mustload.go:65] Loading cluster: ha-139000
	I0826 04:03:41.354895    3347 config.go:182] Loaded profile config "ha-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0826 04:03:41.355242    3347 out.go:270] ! The control-plane node ha-139000 host is not running (will try others): state=Stopped
	! The control-plane node ha-139000 host is not running (will try others): state=Stopped
	W0826 04:03:41.355349    3347 out.go:270] ! The control-plane node ha-139000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-139000-m02 host is not running (will try others): state=Stopped
	I0826 04:03:41.360422    3347 out.go:177] * The control-plane node ha-139000-m03 host is not running: state=Stopped
	I0826 04:03:41.363286    3347 out.go:177]   To start a cluster, run: "minikube start -p ha-139000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-139000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr: exit status 7 (30.291ms)

-- stdout --
	ha-139000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-139000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-139000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-139000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:03:41.395378    3349 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:03:41.395536    3349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:03:41.395539    3349 out.go:358] Setting ErrFile to fd 2...
	I0826 04:03:41.395541    3349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:03:41.395679    3349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:03:41.395804    3349 out.go:352] Setting JSON to false
	I0826 04:03:41.395815    3349 mustload.go:65] Loading cluster: ha-139000
	I0826 04:03:41.395878    3349 notify.go:220] Checking for updates...
	I0826 04:03:41.396057    3349 config.go:182] Loaded profile config "ha-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:03:41.396063    3349 status.go:255] checking status of ha-139000 ...
	I0826 04:03:41.396269    3349 status.go:330] ha-139000 host status = "Stopped" (err=<nil>)
	I0826 04:03:41.396273    3349 status.go:343] host is not running, skipping remaining checks
	I0826 04:03:41.396275    3349 status.go:257] ha-139000 status: &{Name:ha-139000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 04:03:41.396285    3349 status.go:255] checking status of ha-139000-m02 ...
	I0826 04:03:41.396375    3349 status.go:330] ha-139000-m02 host status = "Stopped" (err=<nil>)
	I0826 04:03:41.396378    3349 status.go:343] host is not running, skipping remaining checks
	I0826 04:03:41.396380    3349 status.go:257] ha-139000-m02 status: &{Name:ha-139000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 04:03:41.396384    3349 status.go:255] checking status of ha-139000-m03 ...
	I0826 04:03:41.396470    3349 status.go:330] ha-139000-m03 host status = "Stopped" (err=<nil>)
	I0826 04:03:41.396473    3349 status.go:343] host is not running, skipping remaining checks
	I0826 04:03:41.396475    3349 status.go:257] ha-139000-m03 status: &{Name:ha-139000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 04:03:41.396480    3349 status.go:255] checking status of ha-139000-m04 ...
	I0826 04:03:41.396576    3349 status.go:330] ha-139000-m04 host status = "Stopped" (err=<nil>)
	I0826 04:03:41.396578    3349 status.go:343] host is not running, skipping remaining checks
	I0826 04:03:41.396580    3349 status.go:257] ha-139000-m04 status: &{Name:ha-139000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 7 (29.713708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
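Note: the harness classifies these failures purely by exit status (80 for the provisioning errors, 83 when no control-plane host is running, 7 from status against a stopped cluster). A short Go sketch of recovering that code with os/exec, using the binary and arguments from this run:
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-139000",
			"node", "delete", "m03", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
	
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// In this run the delete returns 83 because every
			// control-plane host is stopped.
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}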

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-139000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-139000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-139000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-139000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 7 (29.12325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
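
The assertion at ha_test.go:413 amounts to decoding "profile list --output json" and comparing the profile's Status field against "Degraded". A self-contained sketch of that check, with the struct trimmed to the two fields involved (these are not minikube's own types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors only the fields the check needs.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated from the actual "profile list" output captured above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-139000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-139000" && p.Status != "Degraded" {
				fmt.Printf("expected %q status but have %q\n", "Degraded", p.Status)
			}
		}
	}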

TestMultiControlPlane/serial/StopCluster (202.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 stop -v=7 --alsologtostderr
E0826 04:04:43.790177    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 04:06:06.877498    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-139000 stop -v=7 --alsologtostderr: (3m21.973403666s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr: exit status 7 (65.52975ms)

-- stdout --
	ha-139000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-139000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-139000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-139000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:07:03.536958    3390 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:07:03.537163    3390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:07:03.537168    3390 out.go:358] Setting ErrFile to fd 2...
	I0826 04:07:03.537171    3390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:07:03.537340    3390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:07:03.537512    3390 out.go:352] Setting JSON to false
	I0826 04:07:03.537526    3390 mustload.go:65] Loading cluster: ha-139000
	I0826 04:07:03.537557    3390 notify.go:220] Checking for updates...
	I0826 04:07:03.537816    3390 config.go:182] Loaded profile config "ha-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:07:03.537823    3390 status.go:255] checking status of ha-139000 ...
	I0826 04:07:03.538108    3390 status.go:330] ha-139000 host status = "Stopped" (err=<nil>)
	I0826 04:07:03.538113    3390 status.go:343] host is not running, skipping remaining checks
	I0826 04:07:03.538116    3390 status.go:257] ha-139000 status: &{Name:ha-139000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 04:07:03.538129    3390 status.go:255] checking status of ha-139000-m02 ...
	I0826 04:07:03.538265    3390 status.go:330] ha-139000-m02 host status = "Stopped" (err=<nil>)
	I0826 04:07:03.538272    3390 status.go:343] host is not running, skipping remaining checks
	I0826 04:07:03.538275    3390 status.go:257] ha-139000-m02 status: &{Name:ha-139000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 04:07:03.538280    3390 status.go:255] checking status of ha-139000-m03 ...
	I0826 04:07:03.538416    3390 status.go:330] ha-139000-m03 host status = "Stopped" (err=<nil>)
	I0826 04:07:03.538421    3390 status.go:343] host is not running, skipping remaining checks
	I0826 04:07:03.538423    3390 status.go:257] ha-139000-m03 status: &{Name:ha-139000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 04:07:03.538431    3390 status.go:255] checking status of ha-139000-m04 ...
	I0826 04:07:03.538555    3390 status.go:330] ha-139000-m04 host status = "Stopped" (err=<nil>)
	I0826 04:07:03.538559    3390 status.go:343] host is not running, skipping remaining checks
	I0826 04:07:03.538562    3390 status.go:257] ha-139000-m04 status: &{Name:ha-139000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr": ha-139000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr": ha-139000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr": ha-139000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-139000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 7 (34.04325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.07s)
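
The three assertions above (ha_test.go:543, 549 and 552) encode the expected post-delete topology: two control planes, three stopped kubelets, two stopped apiservers. Because DeleteSecondaryNode failed earlier, m03 was never removed, so status still reports three control planes across four nodes and every count is off by one. A rough reconstruction of the counting, using strings.Count over the status text rather than the test's own parsing:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Condensed from the "minikube status" output above: four stopped nodes.
		status := strings.Join([]string{
			"ha-139000", "type: Control Plane", "host: Stopped", "kubelet: Stopped", "apiserver: Stopped",
			"ha-139000-m02", "type: Control Plane", "host: Stopped", "kubelet: Stopped", "apiserver: Stopped",
			"ha-139000-m03", "type: Control Plane", "host: Stopped", "kubelet: Stopped", "apiserver: Stopped",
			"ha-139000-m04", "type: Worker", "host: Stopped", "kubelet: Stopped",
		}, "\n")
		fmt.Println(strings.Count(status, "type: Control Plane")) // 3, the test wants 2
		fmt.Println(strings.Count(status, "kubelet: Stopped"))    // 4, the test wants 3
		fmt.Println(strings.Count(status, "apiserver: Stopped"))  // 3, the test wants 2
	}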

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-139000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-139000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.180036s)

-- stdout --
	* [ha-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-139000" primary control-plane node in "ha-139000" cluster
	* Restarting existing qemu2 VM for "ha-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:07:03.601669    3394 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:07:03.601850    3394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:07:03.601853    3394 out.go:358] Setting ErrFile to fd 2...
	I0826 04:07:03.601855    3394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:07:03.601990    3394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:07:03.603019    3394 out.go:352] Setting JSON to false
	I0826 04:07:03.619203    3394 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2186,"bootTime":1724668237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:07:03.619280    3394 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:07:03.624607    3394 out.go:177] * [ha-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:07:03.631564    3394 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:07:03.631600    3394 notify.go:220] Checking for updates...
	I0826 04:07:03.639506    3394 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:07:03.642594    3394 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:07:03.645512    3394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:07:03.648550    3394 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:07:03.651453    3394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:07:03.654873    3394 config.go:182] Loaded profile config "ha-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:07:03.655134    3394 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:07:03.659541    3394 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:07:03.666577    3394 start.go:297] selected driver: qemu2
	I0826 04:07:03.666583    3394 start.go:901] validating driver "qemu2" against &{Name:ha-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-139000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:07:03.666668    3394 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:07:03.668955    3394 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:07:03.668993    3394 cni.go:84] Creating CNI manager for ""
	I0826 04:07:03.668998    3394 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0826 04:07:03.669047    3394 start.go:340] cluster config:
	{Name:ha-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-139000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:07:03.672503    3394 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:07:03.680494    3394 out.go:177] * Starting "ha-139000" primary control-plane node in "ha-139000" cluster
	I0826 04:07:03.684568    3394 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:07:03.684589    3394 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:07:03.684609    3394 cache.go:56] Caching tarball of preloaded images
	I0826 04:07:03.684673    3394 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:07:03.684679    3394 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:07:03.684769    3394 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/ha-139000/config.json ...
	I0826 04:07:03.685216    3394 start.go:360] acquireMachinesLock for ha-139000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:07:03.685250    3394 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "ha-139000"
	I0826 04:07:03.685258    3394 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:07:03.685264    3394 fix.go:54] fixHost starting: 
	I0826 04:07:03.685386    3394 fix.go:112] recreateIfNeeded on ha-139000: state=Stopped err=<nil>
	W0826 04:07:03.685394    3394 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:07:03.689567    3394 out.go:177] * Restarting existing qemu2 VM for "ha-139000" ...
	I0826 04:07:03.696483    3394 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:07:03.696524    3394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:63:eb:4f:8c:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/disk.qcow2
	I0826 04:07:03.698504    3394 main.go:141] libmachine: STDOUT: 
	I0826 04:07:03.698520    3394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:07:03.698555    3394 fix.go:56] duration metric: took 13.292375ms for fixHost
	I0826 04:07:03.698559    3394 start.go:83] releasing machines lock for "ha-139000", held for 13.305334ms
	W0826 04:07:03.698566    3394 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:07:03.698601    3394 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:07:03.698605    3394 start.go:729] Will try again in 5 seconds ...
	I0826 04:07:08.700713    3394 start.go:360] acquireMachinesLock for ha-139000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:07:08.701160    3394 start.go:364] duration metric: took 309.667µs to acquireMachinesLock for "ha-139000"
	I0826 04:07:08.701283    3394 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:07:08.701303    3394 fix.go:54] fixHost starting: 
	I0826 04:07:08.701940    3394 fix.go:112] recreateIfNeeded on ha-139000: state=Stopped err=<nil>
	W0826 04:07:08.701965    3394 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:07:08.706382    3394 out.go:177] * Restarting existing qemu2 VM for "ha-139000" ...
	I0826 04:07:08.710382    3394 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:07:08.710667    3394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:63:eb:4f:8c:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/ha-139000/disk.qcow2
	I0826 04:07:08.719686    3394 main.go:141] libmachine: STDOUT: 
	I0826 04:07:08.719765    3394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:07:08.719848    3394 fix.go:56] duration metric: took 18.546041ms for fixHost
	I0826 04:07:08.719867    3394 start.go:83] releasing machines lock for "ha-139000", held for 18.681334ms
	W0826 04:07:08.720093    3394 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:07:08.727397    3394 out.go:201] 
	W0826 04:07:08.730448    3394 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:07:08.730472    3394 out.go:270] * 
	* 
	W0826 04:07:08.733117    3394 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:07:08.741294    3394 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-139000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 7 (70.474834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
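
Every qemu2 start in this report dies the same way: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so the guest's network file descriptor is never handed to qemu-system-aarch64 (the full command line is visible in the stderr above). A plausible triage on the CI host, assuming socket_vmnet was installed as a Homebrew launchd service the way the minikube qemu2 driver docs describe, would be:

	ls -l /var/run/socket_vmnet                # the daemon's socket should exist
	sudo launchctl list | grep socket_vmnet    # is the launchd service loaded?
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet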

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-139000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-139000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-139000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-139000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 7 (29.762792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-139000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-139000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.7835ms)

-- stdout --
	* The control-plane node ha-139000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-139000"

-- /stdout --
** stderr ** 
	I0826 04:07:08.935620    3409 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:07:08.935765    3409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:07:08.935769    3409 out.go:358] Setting ErrFile to fd 2...
	I0826 04:07:08.935771    3409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:07:08.935905    3409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:07:08.936135    3409 mustload.go:65] Loading cluster: ha-139000
	I0826 04:07:08.936371    3409 config.go:182] Loaded profile config "ha-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0826 04:07:08.936691    3409 out.go:270] ! The control-plane node ha-139000 host is not running (will try others): state=Stopped
	! The control-plane node ha-139000 host is not running (will try others): state=Stopped
	W0826 04:07:08.936812    3409 out.go:270] ! The control-plane node ha-139000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-139000-m02 host is not running (will try others): state=Stopped
	I0826 04:07:08.940083    3409 out.go:177] * The control-plane node ha-139000-m03 host is not running: state=Stopped
	I0826 04:07:08.944155    3409 out.go:177]   To start a cluster, run: "minikube start -p ha-139000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-139000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-139000 -n ha-139000: exit status 7 (29.391583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.06s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-214000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-214000 --driver=qemu2 : exit status 80 (9.994513208s)

-- stdout --
	* [image-214000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-214000" primary control-plane node in "image-214000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-214000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-214000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-214000 -n image-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-214000 -n image-214000: exit status 7 (67.459542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.06s)

TestJSONOutput/start/Command (9.97s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-638000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-638000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.972202667s)

-- stdout --
	{"specversion":"1.0","id":"2aa6b153-805d-4cb3-b76d-da94fc3e04f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-638000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"79510279-40d8-488e-b63a-3f8bf6b91c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19501"}}
	{"specversion":"1.0","id":"3a39946f-2517-4de9-835b-c157df217b50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig"}}
	{"specversion":"1.0","id":"e34feb5a-18ee-4810-bc88-e008aa30301e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"72fbc07b-1fcd-4b30-9d11-2d70458de15e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a6ee5435-218d-4e93-b693-a77f794203c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube"}}
	{"specversion":"1.0","id":"603cc7be-7a22-4373-baad-a709a06d2139","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c79eb26-a19b-4b87-bafb-b611dd521c21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b62fa6e4-e5d8-4c23-bb22-0f4c7f701ef4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6da3497b-f5f1-4bab-8fed-fca3116a6264","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-638000\" primary control-plane node in \"json-output-638000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ca4b340-9df0-416a-94c3-b9c5734497c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"7c01293a-3f09-4b0b-916f-717d07e3f265","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-638000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"0825d882-8914-4bd2-a32a-230b0f6d1d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"1e847510-27d6-40cd-b107-c091e41c9c88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"9ac7cd34-8b55-4273-a483-fa6fd59133fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-638000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"daf7bd6e-3ef6-47a5-a891-5d854fb95cb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"3a9648e4-c5ea-46f5-8243-424216f57b45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-638000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.97s)
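
The secondary error here, "invalid character 'O' looking for beginning of value", comes from the bare "OUTPUT:" / "ERROR:" lines that socket_vmnet_client leaks into stdout between the cloud events: the test requires every stdout line to be valid JSON. A minimal reproduction of that decode step (not the test's exact code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`, // parses fine
			`OUTPUT: `, // leaked by socket_vmnet_client; not JSON
		}
		for _, l := range lines {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(l), &ev); err != nil {
				fmt.Println("converting to cloud events:", err) // invalid character 'O' looking for beginning of value
			}
		}
	}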

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-638000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-638000 --output=json --user=testUser: exit status 83 (76.409959ms)

-- stdout --
	{"specversion":"1.0","id":"4496288a-f441-408e-8fed-9d80e3b9a601","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-638000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"66d1b883-1817-462d-82b6-3784ad3575bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-638000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-638000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-638000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-638000 --output=json --user=testUser: exit status 83 (45.247958ms)

-- stdout --
	* The control-plane node json-output-638000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-638000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-638000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-638000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-524000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-524000 --driver=qemu2 : exit status 80 (10.063657834s)

-- stdout --
	* [first-524000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-524000" primary control-plane node in "first-524000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-524000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-524000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-524000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-26 04:07:43.770613 -0700 PDT m=+1991.400937543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-526000 -n second-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-526000 -n second-526000: exit status 85 (70.020917ms)

-- stdout --
	* Profile "second-526000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-526000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-526000" host is not running, skipping log retrieval (state="* Profile \"second-526000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-526000\"")
helpers_test.go:175: Cleaning up "second-526000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-526000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-26 04:07:43.942712 -0700 PDT m=+1991.573039334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-524000 -n first-524000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-524000 -n first-524000: exit status 7 (29.477625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-524000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-524000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-524000
--- FAIL: TestMinikubeProfile (10.34s)
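
Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the VM never gets its network and provisioning aborts with GUEST_PROVISION. A quick probe to confirm the daemon is down on the CI host, sketched in Go (the socket path comes from the log above; the probe itself is illustrative, not minikube code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path taken from the failure output above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // On this host the dial fails with "connection refused",
            // matching every StartHost error in this run.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Until this dial succeeds, none of the downstream cluster assertions in the remaining tests can pass; restarting the socket_vmnet daemon on the host is the likely fix.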

TestMountStart/serial/StartWithMountFirst (10s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-905000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-905000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.928213041s)

-- stdout --
	* [mount-start-1-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-905000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-905000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-905000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-905000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-905000 -n mount-start-1-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-905000 -n mount-start-1-905000: exit status 7 (68.517ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.00s)

TestMultiNode/serial/FreshStart2Nodes (9.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-143000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-143000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.836123042s)

-- stdout --
	* [multinode-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-143000" primary control-plane node in "multinode-143000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-143000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:07:54.257370    3565 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:07:54.257490    3565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:07:54.257493    3565 out.go:358] Setting ErrFile to fd 2...
	I0826 04:07:54.257495    3565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:07:54.257635    3565 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:07:54.258685    3565 out.go:352] Setting JSON to false
	I0826 04:07:54.274502    3565 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2237,"bootTime":1724668237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:07:54.274568    3565 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:07:54.282396    3565 out.go:177] * [multinode-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:07:54.291299    3565 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:07:54.291340    3565 notify.go:220] Checking for updates...
	I0826 04:07:54.298235    3565 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:07:54.306381    3565 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:07:54.309272    3565 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:07:54.312235    3565 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:07:54.315325    3565 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:07:54.318468    3565 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:07:54.323203    3565 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:07:54.330309    3565 start.go:297] selected driver: qemu2
	I0826 04:07:54.330318    3565 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:07:54.330329    3565 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:07:54.332575    3565 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:07:54.336230    3565 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:07:54.339273    3565 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:07:54.339291    3565 cni.go:84] Creating CNI manager for ""
	I0826 04:07:54.339296    3565 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0826 04:07:54.339301    3565 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0826 04:07:54.339328    3565 start.go:340] cluster config:
	{Name:multinode-143000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:07:54.343279    3565 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:07:54.349295    3565 out.go:177] * Starting "multinode-143000" primary control-plane node in "multinode-143000" cluster
	I0826 04:07:54.353191    3565 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:07:54.353211    3565 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:07:54.353221    3565 cache.go:56] Caching tarball of preloaded images
	I0826 04:07:54.353289    3565 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:07:54.353297    3565 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:07:54.353534    3565 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/multinode-143000/config.json ...
	I0826 04:07:54.353546    3565 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/multinode-143000/config.json: {Name:mk83f30d5695abc13ee2b84b7900c2b794d108d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:07:54.353945    3565 start.go:360] acquireMachinesLock for multinode-143000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:07:54.353982    3565 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "multinode-143000"
	I0826 04:07:54.353997    3565 start.go:93] Provisioning new machine with config: &{Name:multinode-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:07:54.354033    3565 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:07:54.362232    3565 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:07:54.380617    3565 start.go:159] libmachine.API.Create for "multinode-143000" (driver="qemu2")
	I0826 04:07:54.380646    3565 client.go:168] LocalClient.Create starting
	I0826 04:07:54.380719    3565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:07:54.380752    3565 main.go:141] libmachine: Decoding PEM data...
	I0826 04:07:54.380762    3565 main.go:141] libmachine: Parsing certificate...
	I0826 04:07:54.380798    3565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:07:54.380826    3565 main.go:141] libmachine: Decoding PEM data...
	I0826 04:07:54.380839    3565 main.go:141] libmachine: Parsing certificate...
	I0826 04:07:54.381313    3565 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:07:54.533510    3565 main.go:141] libmachine: Creating SSH key...
	I0826 04:07:54.595367    3565 main.go:141] libmachine: Creating Disk image...
	I0826 04:07:54.595376    3565 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:07:54.595552    3565 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:07:54.604697    3565 main.go:141] libmachine: STDOUT: 
	I0826 04:07:54.604717    3565 main.go:141] libmachine: STDERR: 
	I0826 04:07:54.604762    3565 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2 +20000M
	I0826 04:07:54.612642    3565 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:07:54.612674    3565 main.go:141] libmachine: STDERR: 
	I0826 04:07:54.612696    3565 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:07:54.612701    3565 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:07:54.612715    3565 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:07:54.612758    3565 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:a1:54:f7:d8:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:07:54.614374    3565 main.go:141] libmachine: STDOUT: 
	I0826 04:07:54.614391    3565 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:07:54.614410    3565 client.go:171] duration metric: took 233.7635ms to LocalClient.Create
	I0826 04:07:56.616554    3565 start.go:128] duration metric: took 2.262536166s to createHost
	I0826 04:07:56.616633    3565 start.go:83] releasing machines lock for "multinode-143000", held for 2.262677625s
	W0826 04:07:56.616692    3565 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:07:56.627818    3565 out.go:177] * Deleting "multinode-143000" in qemu2 ...
	W0826 04:07:56.665051    3565 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:07:56.665074    3565 start.go:729] Will try again in 5 seconds ...
	I0826 04:08:01.667164    3565 start.go:360] acquireMachinesLock for multinode-143000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:08:01.667641    3565 start.go:364] duration metric: took 379.791µs to acquireMachinesLock for "multinode-143000"
	I0826 04:08:01.667782    3565 start.go:93] Provisioning new machine with config: &{Name:multinode-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:08:01.668246    3565 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:08:01.686095    3565 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:08:01.737034    3565 start.go:159] libmachine.API.Create for "multinode-143000" (driver="qemu2")
	I0826 04:08:01.737083    3565 client.go:168] LocalClient.Create starting
	I0826 04:08:01.737184    3565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:08:01.737249    3565 main.go:141] libmachine: Decoding PEM data...
	I0826 04:08:01.737267    3565 main.go:141] libmachine: Parsing certificate...
	I0826 04:08:01.737324    3565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:08:01.737367    3565 main.go:141] libmachine: Decoding PEM data...
	I0826 04:08:01.737381    3565 main.go:141] libmachine: Parsing certificate...
	I0826 04:08:01.737985    3565 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:08:01.902757    3565 main.go:141] libmachine: Creating SSH key...
	I0826 04:08:01.996644    3565 main.go:141] libmachine: Creating Disk image...
	I0826 04:08:01.996650    3565 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:08:01.996821    3565 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:08:02.005985    3565 main.go:141] libmachine: STDOUT: 
	I0826 04:08:02.006004    3565 main.go:141] libmachine: STDERR: 
	I0826 04:08:02.006050    3565 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2 +20000M
	I0826 04:08:02.013915    3565 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:08:02.013932    3565 main.go:141] libmachine: STDERR: 
	I0826 04:08:02.013942    3565 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:08:02.013947    3565 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:08:02.013957    3565 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:08:02.013985    3565 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:f1:7e:d7:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:08:02.015578    3565 main.go:141] libmachine: STDOUT: 
	I0826 04:08:02.015600    3565 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:08:02.015613    3565 client.go:171] duration metric: took 278.528375ms to LocalClient.Create
	I0826 04:08:04.017828    3565 start.go:128] duration metric: took 2.349569375s to createHost
	I0826 04:08:04.017894    3565 start.go:83] releasing machines lock for "multinode-143000", held for 2.350255542s
	W0826 04:08:04.018202    3565 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:08:04.031912    3565 out.go:201] 
	W0826 04:08:04.035942    3565 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:08:04.036003    3565 out.go:270] * 
	* 
	W0826 04:08:04.038810    3565 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:08:04.051866    3565 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-143000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (65.842791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.90s)
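
The verbose trace above also shows the shape of minikube's recovery path: createHost fails, the half-created profile is deleted, minikube waits five seconds ("Will try again in 5 seconds"), retries once, and only then exits with GUEST_PROVISION. A compressed sketch of that control flow, with invented function names (the real logic lives in minikube's start code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startWithRetry mirrors the try/delete/wait/retry sequence in the log;
    // the names here are made up for the sketch.
    func startWithRetry(createHost func() error, deleteHost func()) error {
        err := createHost()
        if err == nil {
            return nil
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        deleteHost()
        time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
        if err = createHost(); err != nil {
            return fmt.Errorf("error provisioning guest: %w", err) // GUEST_PROVISION
        }
        return nil
    }

    func main() {
        refused := errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        err := startWithRetry(
            func() error { return refused }, // both attempts fail identically here
            func() { fmt.Println(`* Deleting "multinode-143000" in qemu2 ...`) },
        )
        fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    }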

TestMultiNode/serial/DeployApp2Nodes (109.32s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (129.824ms)

** stderr ** 
	error: cluster "multinode-143000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- rollout status deployment/busybox: exit status 1 (58.168041ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.521667ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.779625ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (76.328791ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.048833ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.239542ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.164541ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0826 04:08:21.660815    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.963958ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.814833ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.249416ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.454167ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0826 04:09:43.785335    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.802208ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.391292ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.625542ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.028208ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.063375ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (30.243459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (109.32s)
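
Nearly all of the 109 s above is the pod-IP poll: the test repeatedly shells out to kubectl for {.items[*].status.podIP}, treating each failure as possibly temporary, but with no server behind the context every attempt returns the same error. A sketch of that polling pattern (the deadline and interval are illustrative, not the test's exact constants):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(100 * time.Second)
        for time.Now().Before(deadline) {
            out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p",
                "multinode-143000", "--", "get", "pods", "-o",
                "jsonpath={.items[*].status.podIP}").Output()
            if err == nil && len(out) > 0 {
                fmt.Printf("pod IPs: %s\n", out)
                return
            }
            // "failed to retrieve Pod IPs (may be temporary): exit status 1"
            time.Sleep(10 * time.Second)
        }
        fmt.Println("failed to resolve pod IPs before the deadline")
    }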

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-143000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.333125ms)

** stderr ** 
	error: no server found for cluster "multinode-143000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (29.426375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-143000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-143000 -v 3 --alsologtostderr: exit status 83 (38.310083ms)

-- stdout --
	* The control-plane node multinode-143000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-143000"

-- /stdout --
** stderr ** 
	I0826 04:09:53.564239    3658 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:09:53.564395    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:53.564398    3658 out.go:358] Setting ErrFile to fd 2...
	I0826 04:09:53.564401    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:53.564520    3658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:09:53.564741    3658 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:09:53.564947    3658 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:09:53.568706    3658 out.go:177] * The control-plane node multinode-143000 host is not running: state=Stopped
	I0826 04:09:53.572589    3658 out.go:177]   To start a cluster, run: "minikube start -p multinode-143000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-143000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (30.065625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
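
Exit status 83 is the early "host not running" bail-out that recurs throughout this report: node add loads the profile config (the mustload step in the trace), sees the host is Stopped, prints the start hint, and exits before attempting any work. An illustrative guard with invented names, not minikube's actual code:

    package main

    import (
        "fmt"
        "os"
    )

    // requireRunning is a made-up name; the trace shows minikube's mustload
    // performing the equivalent check before "node add" proceeds.
    func requireRunning(profile, hostState string) {
        if hostState != "Running" {
            fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, hostState)
            fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
            os.Exit(83) // the exit status recorded above
        }
    }

    func main() {
        requireRunning("multinode-143000", "Stopped")
        fmt.Println("unreachable while the host is stopped")
    }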

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-143000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-143000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.643709ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-143000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-143000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-143000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (29.249625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
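
The second error here is a follow-on rather than a separate bug: kubectl wrote only to stderr, so the label check handed encoding/json an empty string, and Go reports exactly this message for empty input:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }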

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-143000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-143000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-143000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-143000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (30.12575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
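
Note: the failure above is the node-count check, not a decode error — the profile JSON parses, but its Nodes array holds a single (primary) entry where the test requires three. A hedged sketch of that check (field names taken from the JSON dump above; not the test's actual source):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct {
                    Name         string
                    ControlPlane bool
                    Worker       bool
                }
            }
        }
    }

    func main() {
        // trimmed from the `profile list --output json` dump above
        data := []byte(`{"invalid":[],"valid":[{"Name":"multinode-143000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(data, &pl); err != nil {
            panic(err)
        }
        fmt.Println(len(pl.Valid[0].Config.Nodes)) // prints 1; the test wants 3
    }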

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status --output json --alsologtostderr: exit status 7 (30.233125ms)

-- stdout --
	{"Name":"multinode-143000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0826 04:09:53.770488    3670 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:09:53.770655    3670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:53.770658    3670 out.go:358] Setting ErrFile to fd 2...
	I0826 04:09:53.770661    3670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:53.770791    3670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:09:53.770902    3670 out.go:352] Setting JSON to true
	I0826 04:09:53.770911    3670 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:09:53.770976    3670 notify.go:220] Checking for updates...
	I0826 04:09:53.771098    3670 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:09:53.771104    3670 status.go:255] checking status of multinode-143000 ...
	I0826 04:09:53.771324    3670 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:09:53.771328    3670 status.go:343] host is not running, skipping remaining checks
	I0826 04:09:53.771331    3670 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-143000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (29.921166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
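
Note: `minikube status --output json` prints a single JSON object when the profile has only one node (see the stdout above), while the test decodes into a slice — hence "cannot unmarshal object into Go value of type []cmd.Status". A minimal reproduction (assumed types, not minikube's cmd package):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        // the exact stdout captured above
        data := []byte(`{"Name":"multinode-143000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        var st []Status
        err := json.Unmarshal(data, &st)
        fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }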

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 node stop m03: exit status 85 (45.598583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-143000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status: exit status 7 (29.08275ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr: exit status 7 (29.724ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:09:53.905631    3678 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:09:53.905767    3678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:53.905770    3678 out.go:358] Setting ErrFile to fd 2...
	I0826 04:09:53.905772    3678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:53.905901    3678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:09:53.906022    3678 out.go:352] Setting JSON to false
	I0826 04:09:53.906032    3678 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:09:53.906097    3678 notify.go:220] Checking for updates...
	I0826 04:09:53.906237    3678 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:09:53.906244    3678 status.go:255] checking status of multinode-143000 ...
	I0826 04:09:53.906441    3678 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:09:53.906445    3678 status.go:343] host is not running, skipping remaining checks
	I0826 04:09:53.906447    3678 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr": multinode-143000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (29.423084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
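
Note: GUEST_NODE_RETRIEVE is the expected outcome here — minikube names secondary nodes m02, m03, ..., and since the earlier multi-node start never brought up any workers, the profile contains only the primary node. A simplified sketch of the lookup that fails (assumed shape, not minikube's source):

    package main

    import "fmt"

    type Node struct{ Name string }

    func retrieve(nodes []Node, name string) (*Node, error) {
        for i := range nodes {
            if nodes[i].Name == name {
                return &nodes[i], nil
            }
        }
        return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
    }

    func main() {
        nodes := []Node{{Name: ""}} // only the primary node, per the profile dump above
        _, err := retrieve(nodes, "m03")
        fmt.Println(err) // retrieving node: Could not find node m03
    }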

TestMultiNode/serial/StartAfterStop (45.53s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.472542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0826 04:09:53.965712    3682 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:09:53.965989    3682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:53.965992    3682 out.go:358] Setting ErrFile to fd 2...
	I0826 04:09:53.965994    3682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:53.966107    3682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:09:53.966345    3682 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:09:53.966523    3682 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:09:53.969621    3682 out.go:201] 
	W0826 04:09:53.973655    3682 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0826 04:09:53.973660    3682 out.go:270] * 
	* 
	W0826 04:09:53.975374    3682 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:09:53.978592    3682 out.go:201] 

** /stderr **
multinode_test.go:284: I0826 04:09:53.965712    3682 out.go:345] Setting OutFile to fd 1 ...
I0826 04:09:53.965989    3682 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 04:09:53.965992    3682 out.go:358] Setting ErrFile to fd 2...
I0826 04:09:53.965994    3682 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 04:09:53.966107    3682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
I0826 04:09:53.966345    3682 mustload.go:65] Loading cluster: multinode-143000
I0826 04:09:53.966523    3682 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 04:09:53.969621    3682 out.go:201] 
W0826 04:09:53.973655    3682 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0826 04:09:53.973660    3682 out.go:270] * 
* 
W0826 04:09:53.975374    3682 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0826 04:09:53.978592    3682 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-143000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr: exit status 7 (30.623458ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:09:54.012546    3684 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:09:54.012734    3684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:54.012742    3684 out.go:358] Setting ErrFile to fd 2...
	I0826 04:09:54.012744    3684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:54.012877    3684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:09:54.012995    3684 out.go:352] Setting JSON to false
	I0826 04:09:54.013008    3684 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:09:54.013061    3684 notify.go:220] Checking for updates...
	I0826 04:09:54.013208    3684 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:09:54.013214    3684 status.go:255] checking status of multinode-143000 ...
	I0826 04:09:54.013420    3684 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:09:54.013424    3684 status.go:343] host is not running, skipping remaining checks
	I0826 04:09:54.013426    3684 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr: exit status 7 (73.193583ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:09:55.279237    3686 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:09:55.279435    3686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:55.279439    3686 out.go:358] Setting ErrFile to fd 2...
	I0826 04:09:55.279443    3686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:55.279627    3686 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:09:55.279785    3686 out.go:352] Setting JSON to false
	I0826 04:09:55.279797    3686 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:09:55.279834    3686 notify.go:220] Checking for updates...
	I0826 04:09:55.280055    3686 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:09:55.280063    3686 status.go:255] checking status of multinode-143000 ...
	I0826 04:09:55.280321    3686 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:09:55.280326    3686 status.go:343] host is not running, skipping remaining checks
	I0826 04:09:55.280329    3686 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr: exit status 7 (72.08825ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:09:57.555065    3688 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:09:57.555326    3688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:57.555331    3688 out.go:358] Setting ErrFile to fd 2...
	I0826 04:09:57.555335    3688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:57.555527    3688 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:09:57.555709    3688 out.go:352] Setting JSON to false
	I0826 04:09:57.555724    3688 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:09:57.555778    3688 notify.go:220] Checking for updates...
	I0826 04:09:57.556027    3688 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:09:57.556036    3688 status.go:255] checking status of multinode-143000 ...
	I0826 04:09:57.556346    3688 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:09:57.556352    3688 status.go:343] host is not running, skipping remaining checks
	I0826 04:09:57.556355    3688 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr: exit status 7 (71.096625ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:09:59.487252    3692 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:09:59.487442    3692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:59.487446    3692 out.go:358] Setting ErrFile to fd 2...
	I0826 04:09:59.487449    3692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:09:59.487640    3692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:09:59.487793    3692 out.go:352] Setting JSON to false
	I0826 04:09:59.487806    3692 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:09:59.487842    3692 notify.go:220] Checking for updates...
	I0826 04:09:59.488065    3692 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:09:59.488073    3692 status.go:255] checking status of multinode-143000 ...
	I0826 04:09:59.488348    3692 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:09:59.488352    3692 status.go:343] host is not running, skipping remaining checks
	I0826 04:09:59.488355    3692 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr: exit status 7 (73.196584ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:10:02.629896    3696 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:10:02.630082    3696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:02.630086    3696 out.go:358] Setting ErrFile to fd 2...
	I0826 04:10:02.630089    3696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:02.630271    3696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:10:02.630422    3696 out.go:352] Setting JSON to false
	I0826 04:10:02.630434    3696 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:10:02.630472    3696 notify.go:220] Checking for updates...
	I0826 04:10:02.630719    3696 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:10:02.630727    3696 status.go:255] checking status of multinode-143000 ...
	I0826 04:10:02.630998    3696 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:10:02.631003    3696 status.go:343] host is not running, skipping remaining checks
	I0826 04:10:02.631006    3696 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr: exit status 7 (70.9965ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:10:05.238391    3698 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:10:05.238590    3698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:05.238595    3698 out.go:358] Setting ErrFile to fd 2...
	I0826 04:10:05.238598    3698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:05.238760    3698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:10:05.238921    3698 out.go:352] Setting JSON to false
	I0826 04:10:05.238935    3698 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:10:05.238979    3698 notify.go:220] Checking for updates...
	I0826 04:10:05.239180    3698 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:10:05.239188    3698 status.go:255] checking status of multinode-143000 ...
	I0826 04:10:05.239459    3698 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:10:05.239465    3698 status.go:343] host is not running, skipping remaining checks
	I0826 04:10:05.239468    3698 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr: exit status 7 (73.972208ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:10:16.125248    3702 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:10:16.125469    3702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:16.125475    3702 out.go:358] Setting ErrFile to fd 2...
	I0826 04:10:16.125478    3702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:16.125655    3702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:10:16.125816    3702 out.go:352] Setting JSON to false
	I0826 04:10:16.125832    3702 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:10:16.125867    3702 notify.go:220] Checking for updates...
	I0826 04:10:16.126088    3702 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:10:16.126096    3702 status.go:255] checking status of multinode-143000 ...
	I0826 04:10:16.126359    3702 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:10:16.126364    3702 status.go:343] host is not running, skipping remaining checks
	I0826 04:10:16.126367    3702 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr: exit status 7 (71.704334ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:10:24.250822    3704 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:10:24.251001    3704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:24.251005    3704 out.go:358] Setting ErrFile to fd 2...
	I0826 04:10:24.251009    3704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:24.251186    3704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:10:24.251340    3704 out.go:352] Setting JSON to false
	I0826 04:10:24.251352    3704 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:10:24.251396    3704 notify.go:220] Checking for updates...
	I0826 04:10:24.251601    3704 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:10:24.251608    3704 status.go:255] checking status of multinode-143000 ...
	I0826 04:10:24.251888    3704 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:10:24.251893    3704 status.go:343] host is not running, skipping remaining checks
	I0826 04:10:24.251896    3704 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr: exit status 7 (74.306625ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:10:39.427403    3713 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:10:39.427883    3713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:39.427890    3713 out.go:358] Setting ErrFile to fd 2...
	I0826 04:10:39.427894    3713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:39.428156    3713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:10:39.428395    3713 out.go:352] Setting JSON to false
	I0826 04:10:39.428424    3713 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:10:39.428516    3713 notify.go:220] Checking for updates...
	I0826 04:10:39.429119    3713 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:10:39.429130    3713 status.go:255] checking status of multinode-143000 ...
	I0826 04:10:39.429396    3713 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:10:39.429401    3713 status.go:343] host is not running, skipping remaining checks
	I0826 04:10:39.429404    3713 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-143000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (33.370459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.53s)
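
Note: the timestamps on the repeated status polls above (04:09:54, :55, :57, :59, then 04:10:02, :05, :16, :24, :39) show the test retrying with growing delays before giving up. A hedged sketch of that poll-with-backoff pattern (an illustration, not the test's actual implementation):

    package main

    import (
        "fmt"
        "time"
    )

    // hostRunning stands in for shelling out to `minikube status`.
    func hostRunning() bool { return false }

    func main() {
        deadline := time.Now().Add(45 * time.Second)
        delay := time.Second
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            if hostRunning() {
                fmt.Println("host is Running")
                return
            }
            fmt.Printf("attempt %d: host still Stopped, retrying in %v\n", attempt, delay)
            time.Sleep(delay)
            delay *= 2 // back off between polls
        }
        fmt.Println("gave up: host never left Stopped")
    }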

TestMultiNode/serial/RestartKeepsNodes (8.98s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-143000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-143000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-143000: (3.621005792s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-143000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-143000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.228554583s)

-- stdout --
	* [multinode-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-143000" primary control-plane node in "multinode-143000" cluster
	* Restarting existing qemu2 VM for "multinode-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:10:43.183309    3738 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:10:43.183484    3738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:43.183491    3738 out.go:358] Setting ErrFile to fd 2...
	I0826 04:10:43.183494    3738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:43.183687    3738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:10:43.185019    3738 out.go:352] Setting JSON to false
	I0826 04:10:43.205136    3738 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2406,"bootTime":1724668237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:10:43.205219    3738 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:10:43.209781    3738 out.go:177] * [multinode-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:10:43.217082    3738 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:10:43.217143    3738 notify.go:220] Checking for updates...
	I0826 04:10:43.223923    3738 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:10:43.226950    3738 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:10:43.229883    3738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:10:43.232941    3738 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:10:43.235932    3738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:10:43.239202    3738 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:10:43.239255    3738 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:10:43.243962    3738 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:10:43.249991    3738 start.go:297] selected driver: qemu2
	I0826 04:10:43.250000    3738 start.go:901] validating driver "qemu2" against &{Name:multinode-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:10:43.250074    3738 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:10:43.252642    3738 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:10:43.252700    3738 cni.go:84] Creating CNI manager for ""
	I0826 04:10:43.252705    3738 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0826 04:10:43.252751    3738 start.go:340] cluster config:
	{Name:multinode-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:10:43.256507    3738 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:10:43.264006    3738 out.go:177] * Starting "multinode-143000" primary control-plane node in "multinode-143000" cluster
	I0826 04:10:43.267958    3738 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:10:43.267976    3738 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:10:43.267988    3738 cache.go:56] Caching tarball of preloaded images
	I0826 04:10:43.268059    3738 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:10:43.268065    3738 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:10:43.268124    3738 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/multinode-143000/config.json ...
	I0826 04:10:43.268611    3738 start.go:360] acquireMachinesLock for multinode-143000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:10:43.268647    3738 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "multinode-143000"
	I0826 04:10:43.268656    3738 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:10:43.268662    3738 fix.go:54] fixHost starting: 
	I0826 04:10:43.268803    3738 fix.go:112] recreateIfNeeded on multinode-143000: state=Stopped err=<nil>
	W0826 04:10:43.268813    3738 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:10:43.273954    3738 out.go:177] * Restarting existing qemu2 VM for "multinode-143000" ...
	I0826 04:10:43.280933    3738 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:10:43.280969    3738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:f1:7e:d7:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:10:43.283071    3738 main.go:141] libmachine: STDOUT: 
	I0826 04:10:43.283092    3738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:10:43.283121    3738 fix.go:56] duration metric: took 14.461333ms for fixHost
	I0826 04:10:43.283126    3738 start.go:83] releasing machines lock for "multinode-143000", held for 14.474ms
	W0826 04:10:43.283133    3738 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:10:43.283163    3738 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:10:43.283168    3738 start.go:729] Will try again in 5 seconds ...
	I0826 04:10:48.285249    3738 start.go:360] acquireMachinesLock for multinode-143000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:10:48.285661    3738 start.go:364] duration metric: took 324.709µs to acquireMachinesLock for "multinode-143000"
	I0826 04:10:48.285792    3738 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:10:48.285809    3738 fix.go:54] fixHost starting: 
	I0826 04:10:48.286615    3738 fix.go:112] recreateIfNeeded on multinode-143000: state=Stopped err=<nil>
	W0826 04:10:48.286640    3738 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:10:48.291056    3738 out.go:177] * Restarting existing qemu2 VM for "multinode-143000" ...
	I0826 04:10:48.298957    3738 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:10:48.299162    3738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:f1:7e:d7:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:10:48.308276    3738 main.go:141] libmachine: STDOUT: 
	I0826 04:10:48.308355    3738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:10:48.308433    3738 fix.go:56] duration metric: took 22.624833ms for fixHost
	I0826 04:10:48.308453    3738 start.go:83] releasing machines lock for "multinode-143000", held for 22.763208ms
	W0826 04:10:48.308617    3738 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:10:48.316029    3738 out.go:201] 
	W0826 04:10:48.320075    3738 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:10:48.320098    3738 out.go:270] * 
	* 
	W0826 04:10:48.323044    3738 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:10:48.330804    3738 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-143000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-143000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (32.198333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.98s)
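
Every restart attempt in this test dies at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its network file descriptor (fd=3 in the command line logged above). A minimal triage on the build host, assuming socket_vmnet is installed under /opt/socket_vmnet as in the logged command (the gateway address below is an example value, not taken from this run):

	# is the daemon running and the socket present?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if not, start it by hand (needs root)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet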

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 node delete m03: exit status 83 (40.329625ms)

-- stdout --
	* The control-plane node multinode-143000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-143000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-143000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr: exit status 7 (29.538916ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:10:48.514465    3755 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:10:48.514605    3755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:48.514611    3755 out.go:358] Setting ErrFile to fd 2...
	I0826 04:10:48.514613    3755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:48.514730    3755 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:10:48.514850    3755 out.go:352] Setting JSON to false
	I0826 04:10:48.514859    3755 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:10:48.514926    3755 notify.go:220] Checking for updates...
	I0826 04:10:48.515041    3755 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:10:48.515047    3755 status.go:255] checking status of multinode-143000 ...
	I0826 04:10:48.515245    3755 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:10:48.515249    3755 status.go:343] host is not running, skipping remaining checks
	I0826 04:10:48.515252    3755 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (30.052125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
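
The delete is refused with exit status 83 because the control-plane host is stopped, not because of anything node-specific. The same guard can be scripted ahead of any node operation, using only commands already exercised above (a sketch):

	state=$(out/minikube-darwin-arm64 status --format '{{.Host}}' -p multinode-143000 -n multinode-143000)
	if [ "$state" != "Running" ]; then
	  echo "control plane is $state; run: minikube start -p multinode-143000"
	fi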

TestMultiNode/serial/StopMultiNode (1.93s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-143000 stop: (1.800470125s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status: exit status 7 (67.775042ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr: exit status 7 (32.717958ms)

-- stdout --
	multinode-143000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0826 04:10:50.446266    3771 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:10:50.446404    3771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:50.446407    3771 out.go:358] Setting ErrFile to fd 2...
	I0826 04:10:50.446409    3771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:50.446540    3771 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:10:50.446660    3771 out.go:352] Setting JSON to false
	I0826 04:10:50.446670    3771 mustload.go:65] Loading cluster: multinode-143000
	I0826 04:10:50.446726    3771 notify.go:220] Checking for updates...
	I0826 04:10:50.446884    3771 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:10:50.446889    3771 status.go:255] checking status of multinode-143000 ...
	I0826 04:10:50.447099    3771 status.go:330] multinode-143000 host status = "Stopped" (err=<nil>)
	I0826 04:10:50.447103    3771 status.go:343] host is not running, skipping remaining checks
	I0826 04:10:50.447106    3771 status.go:257] multinode-143000 status: &{Name:multinode-143000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr": multinode-143000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr": multinode-143000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (29.975333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (1.93s)
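
The assertion counts one "host: Stopped"/"kubelet: Stopped" pair per expected node; since the m02 worker was never created, only the control plane reports in. Roughly what the check boils down to (a hedged paraphrase, not the test's literal code):

	out/minikube-darwin-arm64 -p multinode-143000 status --alsologtostderr | grep -c 'host: Stopped'
	# prints 1 here; a healthy two-node cluster would print 2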

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-143000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-143000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.188640291s)

-- stdout --
	* [multinode-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-143000" primary control-plane node in "multinode-143000" cluster
	* Restarting existing qemu2 VM for "multinode-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:10:50.506803    3775 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:10:50.506930    3775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:50.506933    3775 out.go:358] Setting ErrFile to fd 2...
	I0826 04:10:50.506935    3775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:10:50.507081    3775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:10:50.508083    3775 out.go:352] Setting JSON to false
	I0826 04:10:50.524419    3775 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2413,"bootTime":1724668237,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:10:50.524492    3775 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:10:50.529317    3775 out.go:177] * [multinode-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:10:50.537990    3775 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:10:50.538039    3775 notify.go:220] Checking for updates...
	I0826 04:10:50.544896    3775 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:10:50.547973    3775 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:10:50.550904    3775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:10:50.553937    3775 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:10:50.556983    3775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:10:50.560264    3775 config.go:182] Loaded profile config "multinode-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:10:50.560515    3775 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:10:50.564943    3775 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:10:50.570920    3775 start.go:297] selected driver: qemu2
	I0826 04:10:50.570927    3775 start.go:901] validating driver "qemu2" against &{Name:multinode-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:10:50.570981    3775 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:10:50.573345    3775 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:10:50.573388    3775 cni.go:84] Creating CNI manager for ""
	I0826 04:10:50.573394    3775 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0826 04:10:50.573437    3775 start.go:340] cluster config:
	{Name:multinode-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:10:50.576852    3775 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:10:50.584934    3775 out.go:177] * Starting "multinode-143000" primary control-plane node in "multinode-143000" cluster
	I0826 04:10:50.588935    3775 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:10:50.588947    3775 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:10:50.588955    3775 cache.go:56] Caching tarball of preloaded images
	I0826 04:10:50.589002    3775 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:10:50.589007    3775 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:10:50.589061    3775 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/multinode-143000/config.json ...
	I0826 04:10:50.589505    3775 start.go:360] acquireMachinesLock for multinode-143000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:10:50.589534    3775 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "multinode-143000"
	I0826 04:10:50.589542    3775 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:10:50.589549    3775 fix.go:54] fixHost starting: 
	I0826 04:10:50.589669    3775 fix.go:112] recreateIfNeeded on multinode-143000: state=Stopped err=<nil>
	W0826 04:10:50.589677    3775 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:10:50.597947    3775 out.go:177] * Restarting existing qemu2 VM for "multinode-143000" ...
	I0826 04:10:50.601909    3775 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:10:50.601944    3775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:f1:7e:d7:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:10:50.604010    3775 main.go:141] libmachine: STDOUT: 
	I0826 04:10:50.604036    3775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:10:50.604064    3775 fix.go:56] duration metric: took 14.516416ms for fixHost
	I0826 04:10:50.604069    3775 start.go:83] releasing machines lock for "multinode-143000", held for 14.5315ms
	W0826 04:10:50.604075    3775 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:10:50.604116    3775 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:10:50.604121    3775 start.go:729] Will try again in 5 seconds ...
	I0826 04:10:55.606182    3775 start.go:360] acquireMachinesLock for multinode-143000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:10:55.606607    3775 start.go:364] duration metric: took 334.875µs to acquireMachinesLock for "multinode-143000"
	I0826 04:10:55.607194    3775 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:10:55.607219    3775 fix.go:54] fixHost starting: 
	I0826 04:10:55.607900    3775 fix.go:112] recreateIfNeeded on multinode-143000: state=Stopped err=<nil>
	W0826 04:10:55.607931    3775 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:10:55.616337    3775 out.go:177] * Restarting existing qemu2 VM for "multinode-143000" ...
	I0826 04:10:55.619304    3775 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:10:55.619448    3775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:f1:7e:d7:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/multinode-143000/disk.qcow2
	I0826 04:10:55.628379    3775 main.go:141] libmachine: STDOUT: 
	I0826 04:10:55.628444    3775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:10:55.628509    3775 fix.go:56] duration metric: took 21.292125ms for fixHost
	I0826 04:10:55.628530    3775 start.go:83] releasing machines lock for "multinode-143000", held for 21.894375ms
	W0826 04:10:55.628762    3775 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:10:55.635400    3775 out.go:201] 
	W0826 04:10:55.639372    3775 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:10:55.639398    3775 out.go:270] * 
	* 
	W0826 04:10:55.641896    3775 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:10:55.654305    3775 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-143000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (70.706459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
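
The error output suggests its own recovery path once socket_vmnet is reachable again; a sketch (the --network value mirrors the Network:socket_vmnet field in the cluster config logged above):

	out/minikube-darwin-arm64 delete -p multinode-143000
	out/minikube-darwin-arm64 start -p multinode-143000 --driver=qemu2 --network=socket_vmnet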

TestMultiNode/serial/ValidateNameConflict (20.5s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-143000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-143000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-143000-m01 --driver=qemu2 : exit status 80 (9.987466083s)

-- stdout --
	* [multinode-143000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-143000-m01" primary control-plane node in "multinode-143000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-143000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-143000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-143000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-143000-m02 --driver=qemu2 : exit status 80 (10.287016041s)

-- stdout --
	* [multinode-143000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-143000-m02" primary control-plane node in "multinode-143000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-143000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-143000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-143000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-143000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-143000: exit status 83 (79.954333ms)

-- stdout --
	* The control-plane node multinode-143000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-143000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-143000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-143000 -n multinode-143000: exit status 7 (29.814208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.50s)
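
Note on intent: this test creates standalone profiles named multinode-143000-m01/-m02, the same -mNN suffix minikube gives secondary nodes, so a later "node add" on the base profile would collide with the -m02 profile name. With both profile starts dying on the socket_vmnet refusal and the base cluster stopped (exit status 83 above), the conflict path itself is never reached. When the network daemon is healthy, the collision can be reproduced with (a sketch, using the profile names from this run):

	out/minikube-darwin-arm64 start -p multinode-143000-m02 --driver=qemu2
	out/minikube-darwin-arm64 node add -p multinode-143000   # next node would be named multinode-143000-m02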

TestPreload (10.21s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-466000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-466000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.064252s)

-- stdout --
	* [test-preload-466000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-466000" primary control-plane node in "test-preload-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:11:16.379891    3831 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:11:16.380020    3831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:11:16.380023    3831 out.go:358] Setting ErrFile to fd 2...
	I0826 04:11:16.380025    3831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:11:16.380158    3831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:11:16.381210    3831 out.go:352] Setting JSON to false
	I0826 04:11:16.397395    3831 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2439,"bootTime":1724668237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:11:16.397484    3831 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:11:16.402374    3831 out.go:177] * [test-preload-466000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:11:16.409395    3831 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:11:16.409431    3831 notify.go:220] Checking for updates...
	I0826 04:11:16.416327    3831 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:11:16.419335    3831 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:11:16.422353    3831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:11:16.425307    3831 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:11:16.429447    3831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:11:16.432725    3831 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:11:16.432778    3831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:11:16.436324    3831 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:11:16.443354    3831 start.go:297] selected driver: qemu2
	I0826 04:11:16.443361    3831 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:11:16.443378    3831 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:11:16.445682    3831 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:11:16.447281    3831 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:11:16.451389    3831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:11:16.451438    3831 cni.go:84] Creating CNI manager for ""
	I0826 04:11:16.451447    3831 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:11:16.451457    3831 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:11:16.451482    3831 start.go:340] cluster config:
	{Name:test-preload-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:11:16.455204    3831 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:16.462338    3831 out.go:177] * Starting "test-preload-466000" primary control-plane node in "test-preload-466000" cluster
	I0826 04:11:16.466289    3831 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0826 04:11:16.466384    3831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/test-preload-466000/config.json ...
	I0826 04:11:16.466411    3831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/test-preload-466000/config.json: {Name:mk51aa15d8a0734cf979712a470b104c828f0196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:11:16.466418    3831 cache.go:107] acquiring lock: {Name:mkdfecd2c249d21bf4ba9a955a6cf08754c7d400 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:16.466440    3831 cache.go:107] acquiring lock: {Name:mk96ad98e2934b5f13a9336a0a378914116f479f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:16.466462    3831 cache.go:107] acquiring lock: {Name:mk9e762e92e22da3a8935c99a6388638a42eb05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:16.466414    3831 cache.go:107] acquiring lock: {Name:mkfe81449abbfddf650d25598bf3ba7a0320672e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:16.466611    3831 cache.go:107] acquiring lock: {Name:mkc9b753cd45d7dc404abbf25a9f2ff0d6282ed6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:16.466645    3831 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0826 04:11:16.466666    3831 cache.go:107] acquiring lock: {Name:mk532273f489ff1182ca05fcf68b0ff7709a2bb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:16.466680    3831 cache.go:107] acquiring lock: {Name:mkbac63dbf3fbfb5752c4e54b4ead4b8cb7217a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:16.466757    3831 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:11:16.466774    3831 cache.go:107] acquiring lock: {Name:mk35c31c2dfa55b87636b393d7a217cd1ae39879 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:16.466800    3831 start.go:360] acquireMachinesLock for test-preload-466000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:11:16.466816    3831 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0826 04:11:16.466821    3831 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0826 04:11:16.466871    3831 start.go:364] duration metric: took 62.25µs to acquireMachinesLock for "test-preload-466000"
	I0826 04:11:16.466885    3831 start.go:93] Provisioning new machine with config: &{Name:test-preload-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:11:16.466937    3831 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:11:16.466944    3831 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:11:16.466992    3831 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0826 04:11:16.466993    3831 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0826 04:11:16.466871    3831 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0826 04:11:16.470303    3831 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:11:16.477873    3831 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:11:16.477952    3831 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0826 04:11:16.478205    3831 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0826 04:11:16.478307    3831 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0826 04:11:16.479664    3831 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:11:16.479672    3831 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0826 04:11:16.479716    3831 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0826 04:11:16.480057    3831 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0826 04:11:16.488461    3831 start.go:159] libmachine.API.Create for "test-preload-466000" (driver="qemu2")
	I0826 04:11:16.488480    3831 client.go:168] LocalClient.Create starting
	I0826 04:11:16.488549    3831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:11:16.488581    3831 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:16.488589    3831 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:16.488628    3831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:11:16.488654    3831 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:16.488662    3831 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:16.489031    3831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:11:16.646003    3831 main.go:141] libmachine: Creating SSH key...
	I0826 04:11:16.936144    3831 main.go:141] libmachine: Creating Disk image...
	I0826 04:11:16.936169    3831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:11:16.936352    3831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2
	I0826 04:11:16.946247    3831 main.go:141] libmachine: STDOUT: 
	I0826 04:11:16.946271    3831 main.go:141] libmachine: STDERR: 
	I0826 04:11:16.946320    3831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2 +20000M
	I0826 04:11:16.954826    3831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:11:16.954845    3831 main.go:141] libmachine: STDERR: 
	I0826 04:11:16.954874    3831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2
	I0826 04:11:16.954878    3831 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:11:16.954897    3831 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:11:16.954934    3831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d6:42:7a:32:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2
	I0826 04:11:16.956898    3831 main.go:141] libmachine: STDOUT: 
	I0826 04:11:16.956912    3831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:11:16.956929    3831 client.go:171] duration metric: took 468.454291ms to LocalClient.Create
	I0826 04:11:17.051815    3831 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0826 04:11:17.089796    3831 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0826 04:11:17.098228    3831 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0826 04:11:17.098271    3831 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0826 04:11:17.098306    3831 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0826 04:11:17.138773    3831 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0826 04:11:17.149008    3831 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0826 04:11:17.210085    3831 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0826 04:11:17.215213    3831 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0826 04:11:17.215240    3831 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 748.807917ms
	I0826 04:11:17.215271    3831 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0826 04:11:17.377380    3831 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0826 04:11:17.377467    3831 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0826 04:11:17.709233    3831 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0826 04:11:17.709276    3831 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.242877208s
	I0826 04:11:17.709299    3831 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0826 04:11:18.957079    3831 start.go:128] duration metric: took 2.490160208s to createHost
	I0826 04:11:18.957145    3831 start.go:83] releasing machines lock for "test-preload-466000", held for 2.490303792s
	W0826 04:11:18.957201    3831 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:11:18.974288    3831 out.go:177] * Deleting "test-preload-466000" in qemu2 ...
	W0826 04:11:19.008428    3831 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:11:19.008455    3831 start.go:729] Will try again in 5 seconds ...
	I0826 04:11:19.181376    3831 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0826 04:11:19.181443    3831 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.714863583s
	I0826 04:11:19.181479    3831 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0826 04:11:19.653163    3831 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0826 04:11:19.653220    3831 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.1867105s
	I0826 04:11:19.653244    3831 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0826 04:11:21.191239    3831 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0826 04:11:21.191290    3831 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.724963125s
	I0826 04:11:21.191314    3831 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0826 04:11:22.319852    3831 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0826 04:11:22.319904    3831 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.853558291s
	I0826 04:11:22.319958    3831 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0826 04:11:23.497814    3831 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0826 04:11:23.497862    3831 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.031197916s
	I0826 04:11:23.497891    3831 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0826 04:11:24.008687    3831 start.go:360] acquireMachinesLock for test-preload-466000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:11:24.009122    3831 start.go:364] duration metric: took 367.125µs to acquireMachinesLock for "test-preload-466000"
	I0826 04:11:24.009242    3831 start.go:93] Provisioning new machine with config: &{Name:test-preload-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:11:24.009542    3831 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:11:24.017089    3831 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:11:24.067300    3831 start.go:159] libmachine.API.Create for "test-preload-466000" (driver="qemu2")
	I0826 04:11:24.067377    3831 client.go:168] LocalClient.Create starting
	I0826 04:11:24.067562    3831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:11:24.067627    3831 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:24.067650    3831 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:24.067713    3831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:11:24.067762    3831 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:24.067772    3831 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:24.068295    3831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:11:24.232334    3831 main.go:141] libmachine: Creating SSH key...
	I0826 04:11:24.343516    3831 main.go:141] libmachine: Creating Disk image...
	I0826 04:11:24.343522    3831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:11:24.343685    3831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2
	I0826 04:11:24.353124    3831 main.go:141] libmachine: STDOUT: 
	I0826 04:11:24.353143    3831 main.go:141] libmachine: STDERR: 
	I0826 04:11:24.353190    3831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2 +20000M
	I0826 04:11:24.361364    3831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:11:24.361398    3831 main.go:141] libmachine: STDERR: 
	I0826 04:11:24.361408    3831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2
	I0826 04:11:24.361418    3831 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:11:24.361430    3831 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:11:24.361460    3831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:70:6e:9c:6e:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/test-preload-466000/disk.qcow2
	I0826 04:11:24.363125    3831 main.go:141] libmachine: STDOUT: 
	I0826 04:11:24.363142    3831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:11:24.363155    3831 client.go:171] duration metric: took 295.758416ms to LocalClient.Create
	I0826 04:11:26.261584    3831 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0826 04:11:26.261659    3831 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.795167708s
	I0826 04:11:26.261726    3831 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0826 04:11:26.261784    3831 cache.go:87] Successfully saved all images to host disk.
	I0826 04:11:26.365359    3831 start.go:128] duration metric: took 2.355823875s to createHost
	I0826 04:11:26.365431    3831 start.go:83] releasing machines lock for "test-preload-466000", held for 2.35632375s
	W0826 04:11:26.365683    3831 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:11:26.376263    3831 out.go:201] 
	W0826 04:11:26.386481    3831 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:11:26.386537    3831 out.go:270] * 
	* 
	W0826 04:11:26.388860    3831 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:11:26.400248    3831 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-466000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-26 04:11:26.418966 -0700 PDT m=+2214.052913251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-466000 -n test-preload-466000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-466000 -n test-preload-466000: exit status 7 (65.755167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-466000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-466000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-466000
--- FAIL: TestPreload (10.21s)
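
Every qemu2 start in this section dies the same way: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so host creation aborts with "Connection refused" before a VM ever boots. That reachability can be checked independently of minikube by dialing the unix socket directly. The sketch below is ours, not part of minikube; only the socket path is taken from the logs.

	// probe_vmnet.go: minimal check that the socket_vmnet daemon is
	// accepting connections on the path the failing tests used.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const socketPath = "/var/run/socket_vmnet" // path from the logs above

		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			// On this agent the dial fails exactly like the tests do.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a host in this state the probe would print the same "connection refused" the tests report; restoring the socket_vmnet service on the agent is the likely fix before rerunning the suite.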

TestScheduledStopUnix (9.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-652000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-652000 --memory=2048 --driver=qemu2 : exit status 80 (9.781647875s)

-- stdout --
	* [scheduled-stop-652000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-652000" primary control-plane node in "scheduled-stop-652000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-652000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-652000" primary control-plane node in "scheduled-stop-652000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-26 04:11:36.344933 -0700 PDT m=+2223.979042084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-652000 -n scheduled-stop-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-652000 -n scheduled-stop-652000: exit status 7 (68.07875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-652000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-652000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-652000
--- FAIL: TestScheduledStopUnix (9.93s)
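
The scheduled-stop log shows the create/retry flow clearly: StartHost fails, the half-created profile is deleted, one retry runs after a fixed 5-second delay, and only then does the command exit with GUEST_PROVISION (exit status 80). Below is a compressed sketch of that control flow, modeled on the behavior visible in these logs rather than on minikube's actual implementation.

	// startretry.go: illustrative shape of the StartHost retry seen in
	// these logs; not minikube's real code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for host creation; on this agent it always
	// fails with the socket_vmnet dial error.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // the logs show a 5s backoff
			if err := startHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				return // a real CLI would exit with status 80 here
			}
		}
		fmt.Println("host started")
	}

Because both attempts hit the same refused socket, each of these tests burns roughly ten seconds (create, delete, recreate) before failing, which matches the durations recorded above.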

TestSkaffold (12.5s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1102969790 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1102969790 version: (1.057436667s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-364000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-364000 --memory=2600 --driver=qemu2 : exit status 80 (9.986812416s)

-- stdout --
	* [skaffold-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-364000" primary control-plane node in "skaffold-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-364000" primary control-plane node in "skaffold-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-26 04:11:48.841818 -0700 PDT m=+2236.476130793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-364000 -n skaffold-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-364000 -n skaffold-364000: exit status 7 (63.470417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-364000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-364000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-364000
--- FAIL: TestSkaffold (12.50s)
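
TestSkaffold gets as far as verifying the skaffold binary (v2.13.2) before hitting the same refused socket, which suggests a cheap prerequisite guard: if /var/run/socket_vmnet is unreachable, qemu2-backed tests could skip up front instead of each spending ~10 seconds failing. The following is a hypothetical guard, with names of our own invention rather than anything in the suite.

	// precheck_test.go: hypothetical prerequisite guard; requireSocketVMnet
	// and TestExampleWithGuard are illustrative names, not part of the suite.
	package precheck

	import (
		"net"
		"testing"
		"time"
	)

	func requireSocketVMnet(t *testing.T) {
		t.Helper()
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			t.Skipf("socket_vmnet unavailable, skipping qemu2 test: %v", err)
		}
		conn.Close()
	}

	func TestExampleWithGuard(t *testing.T) {
		requireSocketVMnet(t)
		// the real qemu2-backed test body would follow here
	}

Whether skipping beats failing is a policy call; hard failures keep a broken agent visible in reports like this one.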

TestRunningBinaryUpgrade (645.39s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1047994844 start -p running-upgrade-798000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1047994844 start -p running-upgrade-798000 --memory=2200 --vm-driver=qemu2 : (1m1.877643417s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-798000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0826 04:13:21.655582    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 04:14:43.779253    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 04:16:24.752122    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 04:18:21.650731    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 04:19:43.775138    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-798000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9m8.758976042s)

-- stdout --
	* [running-upgrade-798000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-798000" primary control-plane node in "running-upgrade-798000" cluster
	* Updating the running qemu2 "running-upgrade-798000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0826 04:13:14.770701    4157 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:13:14.770859    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:13:14.770862    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:13:14.770865    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:13:14.771005    4157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:13:14.772038    4157 out.go:352] Setting JSON to false
	I0826 04:13:14.788856    4157 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2557,"bootTime":1724668237,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:13:14.788926    4157 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:13:14.793784    4157 out.go:177] * [running-upgrade-798000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:13:14.800923    4157 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:13:14.800987    4157 notify.go:220] Checking for updates...
	I0826 04:13:14.809767    4157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:13:14.812689    4157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:13:14.815780    4157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:13:14.818812    4157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:13:14.820201    4157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:13:14.824045    4157 config.go:182] Loaded profile config "running-upgrade-798000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0826 04:13:14.827759    4157 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0826 04:13:14.830782    4157 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:13:14.834744    4157 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:13:14.841779    4157 start.go:297] selected driver: qemu2
	I0826 04:13:14.841788    4157 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-798000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50342 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-798000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0826 04:13:14.841860    4157 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:13:14.844247    4157 cni.go:84] Creating CNI manager for ""
	I0826 04:13:14.844266    4157 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:13:14.844293    4157 start.go:340] cluster config:
	{Name:running-upgrade-798000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50342 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-798000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0826 04:13:14.844345    4157 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:13:14.852780    4157 out.go:177] * Starting "running-upgrade-798000" primary control-plane node in "running-upgrade-798000" cluster
	I0826 04:13:14.856782    4157 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0826 04:13:14.856800    4157 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0826 04:13:14.856806    4157 cache.go:56] Caching tarball of preloaded images
	I0826 04:13:14.856867    4157 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:13:14.856873    4157 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0826 04:13:14.856962    4157 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/config.json ...
	I0826 04:13:14.857472    4157 start.go:360] acquireMachinesLock for running-upgrade-798000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:13:34.926293    4157 start.go:364] duration metric: took 20.069137167s to acquireMachinesLock for "running-upgrade-798000"
	I0826 04:13:34.926333    4157 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:13:34.926348    4157 fix.go:54] fixHost starting: 
	I0826 04:13:34.927124    4157 fix.go:112] recreateIfNeeded on running-upgrade-798000: state=Running err=<nil>
	W0826 04:13:34.927133    4157 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:13:34.933284    4157 out.go:177] * Updating the running qemu2 "running-upgrade-798000" VM ...
	I0826 04:13:34.937314    4157 machine.go:93] provisionDockerMachine start ...
	I0826 04:13:34.937361    4157 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:34.937490    4157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050005a0] 0x105002e00 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0826 04:13:34.937495    4157 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 04:13:34.986690    4157 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-798000
	
	I0826 04:13:34.986704    4157 buildroot.go:166] provisioning hostname "running-upgrade-798000"
	I0826 04:13:34.986743    4157 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:34.986862    4157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050005a0] 0x105002e00 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0826 04:13:34.986868    4157 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-798000 && echo "running-upgrade-798000" | sudo tee /etc/hostname
	I0826 04:13:35.039895    4157 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-798000
	
	I0826 04:13:35.039945    4157 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:35.040076    4157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050005a0] 0x105002e00 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0826 04:13:35.040085    4157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-798000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-798000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-798000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 04:13:35.090775    4157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 04:13:35.090787    4157 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19501-1045/.minikube CaCertPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19501-1045/.minikube}
	I0826 04:13:35.090801    4157 buildroot.go:174] setting up certificates
	I0826 04:13:35.090806    4157 provision.go:84] configureAuth start
	I0826 04:13:35.090813    4157 provision.go:143] copyHostCerts
	I0826 04:13:35.090888    4157 exec_runner.go:144] found /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.pem, removing ...
	I0826 04:13:35.090895    4157 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.pem
	I0826 04:13:35.091014    4157 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.pem (1082 bytes)
	I0826 04:13:35.091195    4157 exec_runner.go:144] found /Users/jenkins/minikube-integration/19501-1045/.minikube/cert.pem, removing ...
	I0826 04:13:35.091200    4157 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19501-1045/.minikube/cert.pem
	I0826 04:13:35.091247    4157 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19501-1045/.minikube/cert.pem (1123 bytes)
	I0826 04:13:35.091343    4157 exec_runner.go:144] found /Users/jenkins/minikube-integration/19501-1045/.minikube/key.pem, removing ...
	I0826 04:13:35.091347    4157 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19501-1045/.minikube/key.pem
	I0826 04:13:35.091384    4157 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19501-1045/.minikube/key.pem (1675 bytes)
	I0826 04:13:35.091474    4157 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-798000 san=[127.0.0.1 localhost minikube running-upgrade-798000]
	I0826 04:13:35.281563    4157 provision.go:177] copyRemoteCerts
	I0826 04:13:35.281605    4157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 04:13:35.281618    4157 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/running-upgrade-798000/id_rsa Username:docker}
	I0826 04:13:35.309584    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0826 04:13:35.316679    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0826 04:13:35.325850    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 04:13:35.332765    4157 provision.go:87] duration metric: took 241.958291ms to configureAuth
	I0826 04:13:35.332775    4157 buildroot.go:189] setting minikube options for container-runtime
	I0826 04:13:35.332884    4157 config.go:182] Loaded profile config "running-upgrade-798000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0826 04:13:35.332924    4157 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:35.333013    4157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050005a0] 0x105002e00 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0826 04:13:35.333018    4157 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0826 04:13:35.383922    4157 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0826 04:13:35.383932    4157 buildroot.go:70] root file system type: tmpfs
	I0826 04:13:35.383982    4157 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0826 04:13:35.384037    4157 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:35.384154    4157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050005a0] 0x105002e00 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0826 04:13:35.384188    4157 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0826 04:13:35.437933    4157 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0826 04:13:35.437995    4157 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:35.438107    4157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050005a0] 0x105002e00 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0826 04:13:35.438116    4157 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0826 04:13:35.491574    4157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 04:13:35.491586    4157 machine.go:96] duration metric: took 554.275666ms to provisionDockerMachine
	I0826 04:13:35.491592    4157 start.go:293] postStartSetup for "running-upgrade-798000" (driver="qemu2")
	I0826 04:13:35.491599    4157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 04:13:35.491656    4157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 04:13:35.491665    4157 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/running-upgrade-798000/id_rsa Username:docker}
	I0826 04:13:35.517570    4157 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 04:13:35.518940    4157 info.go:137] Remote host: Buildroot 2021.02.12
	I0826 04:13:35.518948    4157 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19501-1045/.minikube/addons for local assets ...
	I0826 04:13:35.519019    4157 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19501-1045/.minikube/files for local assets ...
	I0826 04:13:35.519108    4157 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19501-1045/.minikube/files/etc/ssl/certs/15392.pem -> 15392.pem in /etc/ssl/certs
	I0826 04:13:35.519194    4157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 04:13:35.521735    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/files/etc/ssl/certs/15392.pem --> /etc/ssl/certs/15392.pem (1708 bytes)
	I0826 04:13:35.529750    4157 start.go:296] duration metric: took 38.1515ms for postStartSetup
	I0826 04:13:35.529765    4157 fix.go:56] duration metric: took 603.432708ms for fixHost
	I0826 04:13:35.529815    4157 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:35.529925    4157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050005a0] 0x105002e00 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0826 04:13:35.529930    4157 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 04:13:35.581120    4157 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670815.599441787
	
	I0826 04:13:35.581134    4157 fix.go:216] guest clock: 1724670815.599441787
	I0826 04:13:35.581138    4157 fix.go:229] Guest: 2024-08-26 04:13:35.599441787 -0700 PDT Remote: 2024-08-26 04:13:35.529767 -0700 PDT m=+20.778730710 (delta=69.674787ms)
	I0826 04:13:35.581155    4157 fix.go:200] guest clock delta is within tolerance: 69.674787ms
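
The guest clock check runs `date +%s.%N` on the guest and compares the parsed timestamp against the host's wall clock; here the ~69.7ms delta is under tolerance, so no resync is attempted. A rough sketch of that comparison (the tolerance value is an assumption; the log does not state it):

    package sketch

    import "time"

    // clockDelta parses a `date +%s.%N` reading (seconds.nanoseconds) and
    // reports the absolute drift from the host clock and whether it falls
    // within tolerance. float64 parsing loses some nanosecond precision,
    // which is fine for a tolerance check.
    func clockDelta(guestUnix float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        sec := int64(guestUnix)
        nsec := int64((guestUnix - float64(sec)) * 1e9)
        delta := time.Unix(sec, nsec).Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }
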
	I0826 04:13:35.581161    4157 start.go:83] releasing machines lock for "running-upgrade-798000", held for 654.867ms
	I0826 04:13:35.581225    4157 ssh_runner.go:195] Run: cat /version.json
	I0826 04:13:35.581227    4157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 04:13:35.581234    4157 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/running-upgrade-798000/id_rsa Username:docker}
	I0826 04:13:35.581243    4157 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/running-upgrade-798000/id_rsa Username:docker}
	W0826 04:13:35.581832    4157 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50266: connect: connection refused
	I0826 04:13:35.581856    4157 retry.go:31] will retry after 362.022998ms: dial tcp [::1]:50266: connect: connection refused
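
Two SSH sessions are opened back-to-back here, and the second dial is refused while sshd is still settling; the runner retries after a short jittered delay instead of failing the step. A sketch of that dial-with-retry shape (the base delay and doubling are assumptions; the log only shows a single jittered ~362ms retry):

    package sketch

    import (
        "net"
        "time"
    )

    // dialWithRetry retries a refused TCP dial with a doubling delay.
    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
        delay := 250 * time.Millisecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            time.Sleep(delay)
            delay *= 2
        }
        return nil, lastErr
    }
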
	W0826 04:13:35.978853    4157 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0826 04:13:35.979008    4157 ssh_runner.go:195] Run: systemctl --version
	I0826 04:13:35.982241    4157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 04:13:35.984802    4157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 04:13:35.984845    4157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0826 04:13:35.989225    4157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0826 04:13:35.995391    4157 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
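
The find/sed pair above rewrites every bridge and podman CNI config so the pod subnet is pinned to 10.244.0.0/16 (and the gateway to 10.244.0.1), which is why 87-podman-bridge.conflist is reported as configured. A Go equivalent of the core substitution (file I/O omitted; this mirrors the sed expressions, not minikube's actual code path):

    package sketch

    import "regexp"

    var (
        subnetRe  = regexp.MustCompile(`"subnet": ".*"`)
        gatewayRe = regexp.MustCompile(`"gateway": ".*"`)
    )

    // pinPodCIDR rewrites the subnet/gateway fields of a CNI conflist so the
    // pod network always lands on 10.244.0.0/16.
    func pinPodCIDR(conf []byte) []byte {
        conf = subnetRe.ReplaceAll(conf, []byte(`"subnet": "10.244.0.0/16"`))
        return gatewayRe.ReplaceAll(conf, []byte(`"gateway": "10.244.0.1"`))
    }
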
	I0826 04:13:35.995401    4157 start.go:495] detecting cgroup driver to use...
	I0826 04:13:35.995486    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 04:13:36.002261    4157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0826 04:13:36.005978    4157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0826 04:13:36.009338    4157 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0826 04:13:36.009365    4157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0826 04:13:36.012566    4157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0826 04:13:36.015620    4157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0826 04:13:36.018438    4157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0826 04:13:36.021328    4157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 04:13:36.024214    4157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0826 04:13:36.027214    4157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0826 04:13:36.030172    4157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0826 04:13:36.033010    4157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 04:13:36.036081    4157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 04:13:36.038984    4157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:36.143297    4157 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0826 04:13:36.151029    4157 start.go:495] detecting cgroup driver to use...
	I0826 04:13:36.151102    4157 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0826 04:13:36.157657    4157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 04:13:36.163430    4157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 04:13:36.171020    4157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 04:13:36.175902    4157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0826 04:13:36.180697    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 04:13:36.186391    4157 ssh_runner.go:195] Run: which cri-dockerd
	I0826 04:13:36.187656    4157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0826 04:13:36.190628    4157 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0826 04:13:36.195739    4157 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0826 04:13:36.303207    4157 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0826 04:13:36.402199    4157 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0826 04:13:36.402260    4157 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0826 04:13:36.407385    4157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:36.510086    4157 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0826 04:13:58.095794    4157 ssh_runner.go:235] Completed: sudo systemctl restart docker: (21.58603625s)
	I0826 04:13:58.095862    4157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0826 04:13:58.100616    4157 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0826 04:13:58.110142    4157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0826 04:13:58.114809    4157 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0826 04:13:58.203537    4157 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0826 04:13:58.296147    4157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:58.384907    4157 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0826 04:13:58.391134    4157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0826 04:13:58.395980    4157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:58.473193    4157 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0826 04:13:58.513728    4157 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0826 04:13:58.513802    4157 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
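
"Will wait 60s" is a bounded poll: stat the socket path until it exists or the deadline passes (here it is present on the first try). A sketch of that wait loop (the poll interval is an assumption):

    package sketch

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls for a file (here, a unix socket) until it exists or
    // the timeout elapses.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
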
	I0826 04:13:58.515818    4157 start.go:563] Will wait 60s for crictl version
	I0826 04:13:58.515864    4157 ssh_runner.go:195] Run: which crictl
	I0826 04:13:58.517590    4157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 04:13:58.529597    4157 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0826 04:13:58.529672    4157 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0826 04:13:58.542189    4157 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0826 04:13:58.566569    4157 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0826 04:13:58.566697    4157 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0826 04:13:58.568184    4157 kubeadm.go:883] updating cluster {Name:running-upgrade-798000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50342 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-798000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0826 04:13:58.568229    4157 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0826 04:13:58.568268    4157 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0826 04:13:58.578316    4157 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0826 04:13:58.578326    4157 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0826 04:13:58.578372    4157 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0826 04:13:58.581474    4157 ssh_runner.go:195] Run: which lz4
	I0826 04:13:58.582763    4157 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 04:13:58.584095    4157 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 04:13:58.584107    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0826 04:13:59.532191    4157 docker.go:649] duration metric: took 949.469833ms to copy over tarball
	I0826 04:13:59.532247    4157 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 04:14:00.756615    4157 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.224373583s)
	I0826 04:14:00.756633    4157 ssh_runner.go:146] rm: /preloaded.tar.lz4
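
The preload flow above is stat-then-transfer: the existence check for /preloaded.tar.lz4 fails, so the ~360MB tarball is copied over, unpacked into /var with `tar -I lz4`, and then removed. A sketch of that sequence (runSSH and copyTo are hypothetical helpers standing in for minikube's SSH runner and scp path):

    package sketch

    import "fmt"

    // ensurePreload copies the preload tarball to the guest only when the
    // remote existence check fails, then unpacks and removes it.
    func ensurePreload(runSSH func(string) error, copyTo func(local, remote string) error, local string) error {
        const remote = "/preloaded.tar.lz4"
        if err := runSSH(fmt.Sprintf("stat -c \"%%s %%y\" %s", remote)); err != nil {
            if err := copyTo(local, remote); err != nil {
                return err
            }
        }
        if err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
            return err
        }
        return runSSH("rm " + remote)
    }
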
	I0826 04:14:00.772687    4157 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0826 04:14:00.776137    4157 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0826 04:14:00.781251    4157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:14:00.863012    4157 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0826 04:14:02.046099    4157 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.183090875s)
	I0826 04:14:02.046191    4157 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0826 04:14:02.059875    4157 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0826 04:14:02.059886    4157 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0826 04:14:02.059890    4157 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 04:14:02.065222    4157 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:14:02.066972    4157 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:14:02.068416    4157 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:14:02.068424    4157 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:14:02.069889    4157 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:14:02.070068    4157 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:14:02.071403    4157 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:14:02.071551    4157 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0826 04:14:02.072315    4157 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:14:02.072671    4157 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:14:02.073825    4157 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0826 04:14:02.073856    4157 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0826 04:14:02.075031    4157 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:14:02.075444    4157 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:14:02.076484    4157 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0826 04:14:02.077601    4157 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
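
Each "daemon lookup ... No such image" line is the first leg of a two-step lookup: try the local Docker daemon, and on a miss fall back to fetching the image from its registry. A sketch of that fallback using go-containerregistry (whether minikube wires it exactly this way is an assumption):

    package sketch

    import (
        "github.com/google/go-containerregistry/pkg/name"
        v1 "github.com/google/go-containerregistry/pkg/v1"
        "github.com/google/go-containerregistry/pkg/v1/daemon"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    // lookupImage prefers the local Docker daemon and falls back to the
    // remote registry when the daemon has no such image.
    func lookupImage(refStr string) (v1.Image, error) {
        ref, err := name.ParseReference(refStr)
        if err != nil {
            return nil, err
        }
        if img, err := daemon.Image(ref); err == nil {
            return img, nil // found locally, no pull needed
        }
        return remote.Image(ref)
    }
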
	I0826 04:14:02.461540    4157 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:14:02.472127    4157 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0826 04:14:02.472154    4157 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:14:02.472205    4157 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:14:02.482860    4157 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0826 04:14:02.489862    4157 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:14:02.490228    4157 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:14:02.496732    4157 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0826 04:14:02.501873    4157 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0826 04:14:02.501893    4157 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:14:02.501936    4157 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:14:02.505979    4157 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0826 04:14:02.505997    4157 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:14:02.506035    4157 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:14:02.512136    4157 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0826 04:14:02.512156    4157 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0826 04:14:02.512193    4157 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0826 04:14:02.521030    4157 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0826 04:14:02.524234    4157 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0826 04:14:02.527926    4157 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:14:02.533452    4157 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0826 04:14:02.533559    4157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0826 04:14:02.539522    4157 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0826 04:14:02.539542    4157 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:14:02.539584    4157 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:14:02.539585    4157 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0826 04:14:02.539598    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0826 04:14:02.557369    4157 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0826 04:14:02.564817    4157 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0826 04:14:02.594322    4157 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0826 04:14:02.594466    4157 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:14:02.597275    4157 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0826 04:14:02.597295    4157 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0826 04:14:02.597349    4157 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0826 04:14:02.655886    4157 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0826 04:14:02.655929    4157 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:14:02.656059    4157 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:14:02.659998    4157 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0826 04:14:02.660110    4157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0826 04:14:02.667504    4157 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0826 04:14:02.667596    4157 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:14:02.710783    4157 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0826 04:14:02.710845    4157 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0826 04:14:02.710884    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0826 04:14:02.710920    4157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0826 04:14:02.710922    4157 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0826 04:14:02.711013    4157 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:14:02.711050    4157 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:14:02.748452    4157 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0826 04:14:02.748483    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0826 04:14:02.757015    4157 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0826 04:14:02.757031    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0826 04:14:02.819528    4157 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
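
Loading is a stream, not a pull: each cached tarball already on the guest is piped straight into the daemon with `sudo cat <file> | docker load`, exactly as the Run lines show. Sketched (runSSH again a hypothetical helper):

    package sketch

    import "fmt"

    // loadCachedImage streams an image tarball on the guest into the Docker
    // daemon, avoiding any registry traffic.
    func loadCachedImage(runSSH func(string) error, tarball string) error {
        return runSSH(fmt.Sprintf("/bin/bash -c \"sudo cat %s | docker load\"", tarball))
    }
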
	I0826 04:14:02.882660    4157 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0826 04:14:02.882673    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0826 04:14:02.969999    4157 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0826 04:14:02.970029    4157 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0826 04:14:02.970036    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0826 04:14:03.119430    4157 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0826 04:14:03.119468    4157 cache_images.go:92] duration metric: took 1.0595855s to LoadCachedImages
	W0826 04:14:03.119510    4157 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0826 04:14:03.119516    4157 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0826 04:14:03.119569    4157 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-798000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-798000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 04:14:03.119636    4157 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0826 04:14:03.133407    4157 cni.go:84] Creating CNI manager for ""
	I0826 04:14:03.133421    4157 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:14:03.133429    4157 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 04:14:03.133438    4157 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-798000 NodeName:running-upgrade-798000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 04:14:03.133499    4157 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-798000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 04:14:03.133564    4157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0826 04:14:03.136418    4157 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 04:14:03.136448    4157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 04:14:03.139758    4157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0826 04:14:03.145227    4157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 04:14:03.150079    4157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0826 04:14:03.155589    4157 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0826 04:14:03.157099    4157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:14:03.248674    4157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 04:14:03.254233    4157 certs.go:68] Setting up /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000 for IP: 10.0.2.15
	I0826 04:14:03.254240    4157 certs.go:194] generating shared ca certs ...
	I0826 04:14:03.254248    4157 certs.go:226] acquiring lock for ca certs: {Name:mk94fc9641f4dd57ada21caac2320dd5698e14b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:14:03.254408    4157 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.key
	I0826 04:14:03.254458    4157 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/proxy-client-ca.key
	I0826 04:14:03.254463    4157 certs.go:256] generating profile certs ...
	I0826 04:14:03.254540    4157 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/client.key
	I0826 04:14:03.254556    4157 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.key.25f2b150
	I0826 04:14:03.254570    4157 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.crt.25f2b150 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0826 04:14:03.291920    4157 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.crt.25f2b150 ...
	I0826 04:14:03.291925    4157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.crt.25f2b150: {Name:mke40743c3b29e12464b1eeb353a802055950125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:14:03.292340    4157 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.key.25f2b150 ...
	I0826 04:14:03.292347    4157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.key.25f2b150: {Name:mk32263ffcc11b75605e41ed4bfa173dc001847f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:14:03.292511    4157 certs.go:381] copying /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.crt.25f2b150 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.crt
	I0826 04:14:03.292652    4157 certs.go:385] copying /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.key.25f2b150 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.key
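
The freshly generated apiserver profile cert carries four IP SANs: the in-cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 10.0.2.15. A self-contained sketch of minting such a cert with Go's crypto/x509 (key size, validity, and subject are illustrative assumptions, not minikube's exact values):

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newAPIServerCert mints a serving certificate with the IP SANs seen in
    // the log, signed by the cluster CA.
    func newAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().AddDate(3, 0, 0),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
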
	I0826 04:14:03.292800    4157 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/proxy-client.key
	I0826 04:14:03.292926    4157 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/1539.pem (1338 bytes)
	W0826 04:14:03.292960    4157 certs.go:480] ignoring /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/1539_empty.pem, impossibly tiny 0 bytes
	I0826 04:14:03.292966    4157 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 04:14:03.292991    4157 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem (1082 bytes)
	I0826 04:14:03.293017    4157 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem (1123 bytes)
	I0826 04:14:03.293045    4157 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/key.pem (1675 bytes)
	I0826 04:14:03.293101    4157 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/files/etc/ssl/certs/15392.pem (1708 bytes)
	I0826 04:14:03.293444    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 04:14:03.300366    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0826 04:14:03.307891    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 04:14:03.315839    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 04:14:03.324343    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 04:14:03.331966    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 04:14:03.341375    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 04:14:03.348255    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 04:14:03.355692    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/1539.pem --> /usr/share/ca-certificates/1539.pem (1338 bytes)
	I0826 04:14:03.362809    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/files/etc/ssl/certs/15392.pem --> /usr/share/ca-certificates/15392.pem (1708 bytes)
	I0826 04:14:03.369518    4157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 04:14:03.376289    4157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 04:14:03.381927    4157 ssh_runner.go:195] Run: openssl version
	I0826 04:14:03.383929    4157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1539.pem && ln -fs /usr/share/ca-certificates/1539.pem /etc/ssl/certs/1539.pem"
	I0826 04:14:03.387733    4157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1539.pem
	I0826 04:14:03.389321    4157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:42 /usr/share/ca-certificates/1539.pem
	I0826 04:14:03.389341    4157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1539.pem
	I0826 04:14:03.391298    4157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1539.pem /etc/ssl/certs/51391683.0"
	I0826 04:14:03.394442    4157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15392.pem && ln -fs /usr/share/ca-certificates/15392.pem /etc/ssl/certs/15392.pem"
	I0826 04:14:03.397436    4157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15392.pem
	I0826 04:14:03.398904    4157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:42 /usr/share/ca-certificates/15392.pem
	I0826 04:14:03.398921    4157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15392.pem
	I0826 04:14:03.400954    4157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15392.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 04:14:03.404040    4157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 04:14:03.407653    4157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 04:14:03.409286    4157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:35 /usr/share/ca-certificates/minikubeCA.pem
	I0826 04:14:03.409311    4157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 04:14:03.411028    4157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 04:14:03.413996    4157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 04:14:03.415808    4157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 04:14:03.418157    4157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 04:14:03.420030    4157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 04:14:03.422119    4157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 04:14:03.423917    4157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 04:14:03.425650    4157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
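
Each `openssl x509 -checkend 86400` above exits non-zero if the certificate will expire within the next 86400 seconds (24h); a clean exit for all six certs lets the runner skip regeneration. The same predicate in Go:

    package sketch

    import (
        "crypto/x509"
        "time"
    )

    // expiresWithin mirrors `openssl x509 -checkend <seconds>`: true when
    // the cert's NotAfter falls inside the given window from now.
    func expiresWithin(cert *x509.Certificate, window time.Duration) bool {
        return time.Now().Add(window).After(cert.NotAfter)
    }

Calling expiresWithin(cert, 24*time.Hour) corresponds to the -checkend 86400 invocations in the log.
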
	I0826 04:14:03.427398    4157 kubeadm.go:392] StartCluster: {Name:running-upgrade-798000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50342 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-798000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0826 04:14:03.427473    4157 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0826 04:14:03.437741    4157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 04:14:03.440957    4157 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 04:14:03.440962    4157 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 04:14:03.440984    4157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 04:14:03.444406    4157 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 04:14:03.444711    4157 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-798000" does not appear in /Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:14:03.444810    4157 kubeconfig.go:62] /Users/jenkins/minikube-integration/19501-1045/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-798000" cluster setting kubeconfig missing "running-upgrade-798000" context setting]
	I0826 04:14:03.444986    4157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/kubeconfig: {Name:mk689667536e8273d65b27bdc18d08f46d2d09b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:14:03.445455    4157 kapi.go:59] client config for running-upgrade-798000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/client.key", CAFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065bbd30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0826 04:14:03.445791    4157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 04:14:03.448543    4157 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-798000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
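
Drift detection is just `diff -u old new`: a non-zero exit means the rendered kubeadm.yaml changed since the last run (here the CRI socket gained its unix:// scheme and the cgroup driver flipped from systemd to cgroupfs), so the cluster is reconfigured from the new file instead of restarted as-is. In outline:

    package sketch

    // configDrifted reports whether the freshly rendered kubeadm.yaml
    // differs from the one already on disk; the log shows minikube shelling
    // out to `sudo diff -u` for this.
    func configDrifted(runSSH func(string) error) bool {
        // diff exits 0 on identical files and 1 when they differ.
        return runSSH("sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new") != nil
    }
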
	I0826 04:14:03.448549    4157 kubeadm.go:1160] stopping kube-system containers ...
	I0826 04:14:03.448592    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0826 04:14:03.460201    4157 docker.go:483] Stopping containers: [8ec905c2494f cb531344e36e 5f83772e205f d3eeaec7b527 b412218e02dc 728052cc7045 edd62acc2f9e 76153893296e 1c0e257e495c 248906c4a556 baa7e11bb517 dc8f312bf0d6 c78986548653 f951c7963ddb 925977f20843 ed78253ab832 c9ca783bb30a fe2f6206972a 60ff6adfd778 6c9889306b12]
	I0826 04:14:03.460262    4157 ssh_runner.go:195] Run: docker stop 8ec905c2494f cb531344e36e 5f83772e205f d3eeaec7b527 b412218e02dc 728052cc7045 edd62acc2f9e 76153893296e 1c0e257e495c 248906c4a556 baa7e11bb517 dc8f312bf0d6 c78986548653 f951c7963ddb 925977f20843 ed78253ab832 c9ca783bb30a fe2f6206972a 60ff6adfd778 6c9889306b12
	I0826 04:14:03.478144    4157 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 04:14:03.563776    4157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 04:14:03.567411    4157 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 26 11:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug 26 11:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 26 11:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug 26 11:13 /etc/kubernetes/scheduler.conf
	
	I0826 04:14:03.567448    4157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/admin.conf
	I0826 04:14:03.570489    4157 kubeadm.go:163] "https://control-plane.minikube.internal:50342" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0826 04:14:03.570522    4157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 04:14:03.573049    4157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/kubelet.conf
	I0826 04:14:03.575976    4157 kubeadm.go:163] "https://control-plane.minikube.internal:50342" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0826 04:14:03.575997    4157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 04:14:03.579130    4157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/controller-manager.conf
	I0826 04:14:03.582126    4157 kubeadm.go:163] "https://control-plane.minikube.internal:50342" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0826 04:14:03.582146    4157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 04:14:03.584774    4157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/scheduler.conf
	I0826 04:14:03.587914    4157 kubeadm.go:163] "https://control-plane.minikube.internal:50342" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0826 04:14:03.587932    4157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 04:14:03.591007    4157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 04:14:03.593961    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 04:14:03.624557    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 04:14:04.006425    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 04:14:04.214903    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 04:14:04.236364    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 04:14:04.258535    4157 api_server.go:52] waiting for apiserver process to appear ...
	I0826 04:14:04.258619    4157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:14:04.759068    4157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:14:05.260941    4157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:14:05.760759    4157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:14:06.260666    4157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:14:06.759157    4157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:14:07.260732    4157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:14:07.760654    4157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:14:07.787097    4157 api_server.go:72] duration metric: took 3.528617417s to wait for apiserver process to appear ...
	I0826 04:14:07.787116    4157 api_server.go:88] waiting for apiserver healthz status ...
	I0826 04:14:07.787126    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:12.789135    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:12.789157    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:17.789346    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:17.789384    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:22.789696    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:22.789726    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:27.790186    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:27.790263    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:32.791366    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:32.791405    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:37.791759    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:37.791806    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:42.792397    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:42.792466    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:47.793769    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:47.793830    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:52.795460    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:52.795484    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:57.797140    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:57.797200    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:02.799682    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:02.799764    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:07.802180    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
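The twelve probes above all follow one pattern: an HTTPS GET against /healthz that times out after roughly five seconds, immediately followed by the next attempt. For reference, a minimal Go sketch of that probe-and-retry loop follows; the timeout value, TLS handling, and function names are assumptions for illustration, not minikube's actual api_server.go implementation.

// Hypothetical sketch of the retry loop behind the "waiting for apiserver
// healthz status" messages above: each probe is an HTTPS GET against
// /healthz with a short client timeout, retried until an overall deadline
// expires. All names and durations here are assumptions, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between probes in the log
		Transport: &http.Transport{
			// Assumption: the guest's apiserver cert is not verified in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Mirrors the "stopped: ... Client.Timeout exceeded" lines above;
			// the client timeout itself paces each iteration in that case.
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			return nil
		}
	}
	return fmt.Errorf("apiserver never became healthy within %s", overall)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Because the VM at 10.0.2.15 never answers, every iteration here ends in Client.Timeout, and the loop falls through to the diagnostic dump that follows.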
	I0826 04:15:07.802607    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:07.830747    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:15:07.830888    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:07.848560    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:15:07.848660    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:07.861943    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:15:07.862029    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:07.874135    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:15:07.874235    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:07.885546    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:15:07.885620    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:07.895973    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:15:07.896041    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:07.906122    4157 logs.go:276] 0 containers: []
	W0826 04:15:07.906133    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:07.906192    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:07.917204    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:15:07.917220    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:07.917228    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:07.943866    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:15:07.943872    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:15:07.984918    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:15:07.984932    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:15:08.002757    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:15:08.002771    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:15:08.017900    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:15:08.017909    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:15:08.029459    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:15:08.029469    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:15:08.045937    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:08.045947    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:08.050188    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:15:08.050196    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:15:08.064462    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:15:08.064474    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:15:08.076246    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:15:08.076257    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:15:08.089751    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:15:08.089767    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:15:08.101449    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:15:08.101463    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:15:08.113143    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:15:08.113155    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:15:08.124620    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:15:08.124630    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:08.138595    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:08.138611    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:08.180440    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:08.180452    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:08.253545    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:15:08.253556    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:15:08.269156    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:15:08.269166    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:15:08.281007    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:15:08.281017    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
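Each failed probe triggers the same diagnostic pass seen above: enumerate containers (including exited ones) for every control-plane component with docker ps -a --filter, then tail the last 400 lines of each. A rough Go sketch of that fan-out is below, under the assumption of a local docker CLI rather than minikube's ssh_runner; the component list comes from the log, everything else is illustrative.

// Hypothetical sketch of the per-component log-gathering pass repeated
// throughout this output. Not minikube's logs.go; a local-docker stand-in.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the k8s_<component> prefix, as in the "docker ps -a --filter" lines above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each container, as the log does.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}

The kindnet query returning zero containers matches the warning at logs.go:278 above, presumably because this profile does not use the kindnet CNI; every other component reports two containers, consistent with one crashed and one restarted instance each.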
	I0826 04:15:10.799305    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:15.801864    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:15.802151    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:15.833644    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:15:15.833735    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:15.848516    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:15:15.848601    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:15.862674    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:15:15.862748    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:15.873174    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:15:15.873248    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:15.883994    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:15:15.884073    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:15.894214    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:15:15.894310    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:15.904377    4157 logs.go:276] 0 containers: []
	W0826 04:15:15.904387    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:15.904445    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:15.915052    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:15:15.915068    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:15:15.915073    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:15:15.926280    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:15:15.926292    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:15:15.938498    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:15:15.938509    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:15:15.950474    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:15:15.950487    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:15:15.973791    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:15:15.973803    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:15:15.994243    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:15:15.994260    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:15:16.005743    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:15:16.005754    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:15:16.019888    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:16.019899    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:16.061416    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:15:16.061428    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:15:16.083848    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:15:16.083863    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:15:16.095025    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:15:16.095036    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:15:16.111008    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:16.111021    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:16.115826    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:15:16.115833    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:15:16.129244    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:15:16.129258    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:15:16.140867    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:16.140879    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:16.176201    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:15:16.176215    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:15:16.213952    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:15:16.213961    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:15:16.228582    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:16.228592    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:16.255826    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:15:16.255834    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:18.769782    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:23.772210    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:23.772445    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:23.802236    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:15:23.802365    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:23.820549    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:15:23.820635    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:23.834391    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:15:23.834468    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:23.846894    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:15:23.846964    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:23.857258    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:15:23.857331    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:23.868050    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:15:23.868114    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:23.878126    4157 logs.go:276] 0 containers: []
	W0826 04:15:23.878139    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:23.878191    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:23.888779    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:15:23.888795    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:23.888801    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:23.923137    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:15:23.923150    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:15:23.935130    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:23.935141    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:23.974080    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:15:23.974088    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:15:23.988403    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:15:23.988415    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:15:24.025890    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:15:24.025900    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:15:24.044516    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:15:24.044529    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:15:24.059611    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:24.059625    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:24.064584    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:15:24.064590    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:15:24.078246    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:15:24.078257    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:15:24.098017    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:15:24.098027    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:15:24.109300    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:15:24.109313    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:15:24.120949    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:15:24.120962    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:15:24.132628    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:15:24.132640    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:15:24.143853    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:15:24.143863    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:15:24.155400    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:15:24.155411    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:15:24.172458    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:15:24.172468    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:15:24.187178    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:24.187192    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:24.213169    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:15:24.213177    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:26.725044    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:31.727417    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:31.727592    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:31.740172    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:15:31.740247    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:31.750883    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:15:31.750957    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:31.765710    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:15:31.765796    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:31.775976    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:15:31.776041    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:31.786277    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:15:31.786354    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:31.799427    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:15:31.799497    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:31.817643    4157 logs.go:276] 0 containers: []
	W0826 04:15:31.817656    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:31.817717    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:31.828030    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:15:31.828046    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:15:31.828051    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:15:31.842506    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:15:31.842517    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:15:31.853809    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:31.853821    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:31.892496    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:15:31.892503    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:15:31.904108    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:15:31.904118    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:15:31.915735    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:15:31.915744    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:15:31.932514    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:15:31.932524    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:31.944853    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:15:31.944862    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:15:31.956846    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:31.956859    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:31.982068    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:31.982077    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:31.986233    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:31.986242    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:32.022613    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:15:32.022625    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:15:32.060609    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:15:32.060619    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:15:32.077170    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:15:32.077182    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:15:32.091919    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:15:32.091931    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:15:32.105945    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:15:32.105958    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:15:32.121996    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:15:32.122007    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:15:32.133879    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:15:32.133891    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:15:32.148226    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:15:32.148240    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:15:34.661612    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:39.663759    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:39.663947    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:39.680151    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:15:39.680233    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:39.693152    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:15:39.693233    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:39.704257    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:15:39.704327    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:39.714939    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:15:39.715002    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:39.725941    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:15:39.726014    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:39.736694    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:15:39.736808    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:39.747296    4157 logs.go:276] 0 containers: []
	W0826 04:15:39.747306    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:39.747363    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:39.757399    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:15:39.757410    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:15:39.757415    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:15:39.775380    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:15:39.775389    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:15:39.786907    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:15:39.786919    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:15:39.802131    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:15:39.802141    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:15:39.817304    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:39.817319    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:39.858367    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:15:39.858376    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:15:39.870210    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:15:39.870228    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:15:39.881517    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:15:39.881529    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:15:39.893197    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:15:39.893210    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:15:39.904304    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:15:39.904313    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:15:39.918216    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:15:39.918229    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:15:39.932339    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:39.932353    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:39.959502    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:15:39.959512    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:39.973667    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:39.973678    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:39.978168    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:15:39.978177    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:15:40.015905    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:15:40.015918    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:15:40.028033    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:15:40.028047    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:15:40.044623    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:40.044633    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:40.079784    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:15:40.079796    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:15:42.594869    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:47.597146    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:47.597487    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:47.634320    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:15:47.634451    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:47.658720    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:15:47.658807    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:47.671967    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:15:47.672050    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:47.683355    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:15:47.683425    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:47.698047    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:15:47.698114    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:47.708546    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:15:47.708620    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:47.719181    4157 logs.go:276] 0 containers: []
	W0826 04:15:47.719194    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:47.719253    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:47.729565    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:15:47.729582    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:15:47.729588    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:15:47.743215    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:15:47.743225    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:15:47.757816    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:15:47.757828    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:15:47.769600    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:15:47.769611    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:15:47.786856    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:47.786866    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:47.791427    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:15:47.791435    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:15:47.831373    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:15:47.831384    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:15:47.842792    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:15:47.842803    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:15:47.861728    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:15:47.861741    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:15:47.873349    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:15:47.873361    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:15:47.884664    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:15:47.884674    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:15:47.899590    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:47.899602    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:47.924985    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:15:47.924994    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:47.936542    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:47.936558    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:47.975871    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:47.975882    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:48.013348    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:15:48.013360    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:15:48.028034    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:15:48.028047    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:15:48.042239    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:15:48.042249    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:15:48.054017    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:15:48.054030    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:15:50.567962    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:55.570604    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:55.571042    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:55.608697    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:15:55.608832    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:55.630282    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:15:55.630378    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:55.644699    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:15:55.644784    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:55.657209    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:15:55.657283    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:55.672427    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:15:55.672504    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:55.685318    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:15:55.685400    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:55.697388    4157 logs.go:276] 0 containers: []
	W0826 04:15:55.697401    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:55.697466    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:55.714206    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:15:55.714225    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:15:55.714231    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:15:55.730208    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:15:55.730218    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:15:55.741521    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:55.741533    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:55.746051    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:15:55.746059    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:15:55.759586    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:15:55.759596    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:15:55.771269    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:15:55.771279    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:15:55.791091    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:15:55.791103    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:15:55.807897    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:15:55.807908    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:15:55.819272    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:55.819282    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:55.861523    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:15:55.861534    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:15:55.899265    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:15:55.899276    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:15:55.913872    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:15:55.913883    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:15:55.925577    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:55.925587    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:55.949625    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:55.949634    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:55.986694    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:15:55.986707    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:15:56.000887    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:15:56.000898    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:15:56.011631    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:15:56.011643    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:15:56.023384    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:15:56.023394    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:15:56.035119    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:15:56.035131    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:58.548561    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:03.550829    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:03.550998    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:03.565860    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:16:03.565934    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:03.576307    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:16:03.576374    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:03.586757    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:16:03.586826    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:03.603454    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:16:03.603529    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:03.614300    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:16:03.614366    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:03.625560    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:16:03.625630    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:03.635292    4157 logs.go:276] 0 containers: []
	W0826 04:16:03.635313    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:03.635371    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:03.645988    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:16:03.646005    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:03.646011    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:03.670887    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:03.670901    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:03.675832    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:16:03.675841    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:16:03.687110    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:16:03.687122    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:16:03.699536    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:16:03.699548    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:16:03.714572    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:16:03.714585    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:16:03.753052    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:16:03.753072    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:16:03.767908    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:16:03.767919    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:16:03.780104    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:16:03.780124    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:16:03.799369    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:03.799382    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:03.841883    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:16:03.841893    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:16:03.857566    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:16:03.857579    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:03.872235    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:16:03.872248    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:16:03.884913    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:16:03.884928    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:16:03.896797    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:16:03.896809    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:16:03.908650    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:03.908662    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:03.943221    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:16:03.943232    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:16:03.958603    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:16:03.958615    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:16:03.978454    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:16:03.978467    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:16:06.492641    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:11.493979    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:11.494440    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:11.532589    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:16:11.532728    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:11.552900    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:16:11.553006    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:11.567850    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:16:11.567933    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:11.580509    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:16:11.580584    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:11.591335    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:16:11.591407    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:11.602146    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:16:11.602215    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:11.613541    4157 logs.go:276] 0 containers: []
	W0826 04:16:11.613552    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:11.613604    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:11.628042    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:16:11.628062    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:16:11.628068    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:16:11.641611    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:16:11.641620    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:16:11.657919    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:11.657931    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:11.683826    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:16:11.683842    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:11.696180    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:11.696195    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:11.730822    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:16:11.730833    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:16:11.745409    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:16:11.745422    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:16:11.757513    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:16:11.757524    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:16:11.769503    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:16:11.769513    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:16:11.795032    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:16:11.795043    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:16:11.837554    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:11.837566    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:11.842165    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:16:11.842178    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:16:11.882698    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:16:11.882718    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:16:11.898156    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:16:11.898174    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:16:11.915170    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:16:11.915183    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:16:11.933517    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:11.933532    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:11.977013    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:16:11.977024    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:16:11.993267    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:16:11.993277    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:16:12.005091    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:16:12.005103    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:16:14.517575    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:19.519997    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:19.520529    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:19.563042    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:16:19.563188    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:19.587293    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:16:19.587389    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:19.601485    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:16:19.601564    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:19.617420    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:16:19.617492    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:19.627980    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:16:19.628046    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:19.643102    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:16:19.643181    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:19.653978    4157 logs.go:276] 0 containers: []
	W0826 04:16:19.653988    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:19.654053    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:19.664948    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:16:19.664965    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:16:19.664971    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:16:19.676857    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:19.676871    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:19.712449    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:16:19.712460    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:16:19.726783    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:16:19.726797    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:16:19.741597    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:16:19.741608    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:16:19.760185    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:16:19.760198    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:16:19.776462    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:16:19.776476    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:16:19.789246    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:16:19.789259    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:19.804634    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:19.804647    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:19.809168    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:16:19.809180    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:16:19.824951    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:16:19.824964    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:16:19.837050    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:16:19.837064    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:16:19.851014    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:16:19.851026    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:16:19.863456    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:16:19.863470    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:16:19.879311    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:19.879323    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:19.923344    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:16:19.923355    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:16:19.938159    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:16:19.938172    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:16:19.949512    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:19.949523    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:19.975648    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:16:19.975660    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
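	The block above is one full pass of minikube's diagnostic log gathering: for each control-plane component it discovers container IDs with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", then tails each match with "docker logs --tail 400 <id>", emitting a warning (as with kindnet here) when no container matches. A minimal Go sketch of that pattern — not minikube's actual source; containerIDs and gatherComponentLogs are hypothetical helper names:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    // gatherComponentLogs mirrors: docker logs --tail 400 <id> for each ID found.
	    func gatherComponentLogs(component string) {
	        ids, err := containerIDs(component)
	        if err != nil || len(ids) == 0 {
	            fmt.Printf("No container was found matching %q\n", component)
	            return
	        }
	        for _, id := range ids {
	            out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	            fmt.Printf("==> %s [%s] <==\n%s", component, id, out)
	        }
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
	            gatherComponentLogs(c)
	        }
	    }

	The same pass also pulls journalctl (kubelet, docker/cri-docker), dmesg, "kubectl describe nodes", and crictl/docker container status, visible in the lines above; the cycle then repeats after every failed apiserver health probe.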
	I0826 04:16:22.521140    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:27.522360    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
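	Every retry in the remainder of this log follows the probe shape shown in the two lines above: an HTTPS GET against /healthz that is abandoned after roughly five seconds ("Client.Timeout exceeded while awaiting headers"), a log-gathering pass, then a short pause before the next attempt. A minimal Go sketch of that probe loop under those assumptions — the 5 s timeout and 3 s pause are read off the log timestamps, not minikube's exact constants:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5 s gap between each probe and its "stopped:" line
	            Transport: &http.Transport{
	                // the guest apiserver's cert is not in the host trust store
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for attempt := 0; attempt < 10; attempt++ {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err != nil {
	                fmt.Println("stopped:", err) // e.g. context deadline exceeded
	                time.Sleep(3 * time.Second)  // back off before the next attempt
	                continue
	            }
	            resp.Body.Close()
	            fmt.Println("healthz:", resp.Status)
	            if resp.StatusCode == http.StatusOK {
	                return // apiserver is healthy; stop probing
	            }
	            time.Sleep(3 * time.Second)
	        }
	    }

	In this run the probe never succeeds, so the gather/probe cycle below repeats until the test's overall deadline expires.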
	I0826 04:16:27.522780    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:27.564678    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:16:27.564819    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:27.585817    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:16:27.585922    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:27.603787    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:16:27.603870    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:27.615678    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:16:27.615742    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:27.629572    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:16:27.629645    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:27.640026    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:16:27.640097    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:27.651082    4157 logs.go:276] 0 containers: []
	W0826 04:16:27.651093    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:27.651147    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:27.662105    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:16:27.662119    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:27.662124    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:27.701649    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:16:27.701664    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:16:27.766131    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:16:27.766144    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:16:27.781351    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:16:27.781363    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:16:27.797485    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:16:27.797498    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:16:27.814498    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:16:27.814510    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:16:27.827150    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:16:27.827161    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:27.840266    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:16:27.840280    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:16:27.856074    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:16:27.856086    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:16:27.872099    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:27.872112    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:27.898148    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:16:27.898156    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:16:27.917160    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:16:27.917170    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:16:27.931037    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:16:27.931049    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:16:27.948688    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:16:27.948697    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:16:27.966869    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:27.966881    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:28.008587    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:28.008607    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:28.013421    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:16:28.013437    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:16:28.031935    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:16:28.031951    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:16:28.044608    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:16:28.044621    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:16:30.559481    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:35.562251    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:35.562703    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:35.597349    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:16:35.597488    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:35.618285    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:16:35.618382    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:35.633897    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:16:35.633981    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:35.650447    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:16:35.650517    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:35.662646    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:16:35.662731    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:35.674454    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:16:35.674534    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:35.685795    4157 logs.go:276] 0 containers: []
	W0826 04:16:35.685807    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:35.685866    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:35.697373    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:16:35.697391    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:35.697398    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:35.735787    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:16:35.735804    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:16:35.776964    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:16:35.776977    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:16:35.789888    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:16:35.789901    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:16:35.805267    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:35.805279    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:35.832201    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:35.832215    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:35.875679    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:16:35.875692    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:16:35.890887    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:16:35.890906    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:16:35.905604    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:16:35.905617    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:16:35.921556    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:16:35.921569    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:16:35.934597    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:35.934614    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:35.940012    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:16:35.940024    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:16:35.954214    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:16:35.954227    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:16:35.967416    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:16:35.967427    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:16:35.979988    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:16:35.979998    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:16:35.998807    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:16:35.998818    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:36.020358    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:16:36.020369    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:16:36.037969    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:16:36.037980    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:16:36.052959    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:16:36.052970    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:16:38.566973    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:43.567527    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:43.567787    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:43.595283    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:16:43.595390    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:43.615383    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:16:43.615472    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:43.631681    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:16:43.631759    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:43.644097    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:16:43.644167    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:43.656091    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:16:43.656165    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:43.667604    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:16:43.667714    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:43.679418    4157 logs.go:276] 0 containers: []
	W0826 04:16:43.679431    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:43.679493    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:43.691458    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:16:43.691476    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:16:43.691482    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:16:43.709304    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:16:43.709315    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:16:43.721851    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:16:43.721861    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:16:43.733992    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:43.734019    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:43.775695    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:16:43.775707    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:16:43.790650    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:16:43.790662    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:16:43.808102    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:16:43.808114    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:16:43.820223    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:16:43.820232    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:16:43.834880    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:16:43.834888    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:43.854918    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:16:43.854930    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:16:43.866924    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:16:43.866936    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:16:43.886540    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:16:43.886549    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:16:43.898789    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:16:43.898802    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:16:43.916883    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:43.916894    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:43.960875    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:16:43.960891    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:16:43.976097    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:43.976112    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:44.001764    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:44.001787    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:44.006842    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:16:44.006852    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:16:44.046516    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:16:44.046527    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:16:46.570454    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:51.572928    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:51.573071    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:51.593148    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:16:51.593244    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:51.612857    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:16:51.612932    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:51.625255    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:16:51.625335    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:51.636754    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:16:51.636829    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:51.648293    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:16:51.648364    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:51.660202    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:16:51.660271    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:51.671147    4157 logs.go:276] 0 containers: []
	W0826 04:16:51.671158    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:51.671218    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:51.683461    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:16:51.683474    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:16:51.683479    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:16:51.696233    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:16:51.696243    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:16:51.715101    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:16:51.715113    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:16:51.730332    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:16:51.730343    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:16:51.745731    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:16:51.745739    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:16:51.759113    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:16:51.759122    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:16:51.772255    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:16:51.772268    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:16:51.785159    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:51.785170    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:51.811071    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:51.811086    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:51.854293    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:16:51.854312    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:16:51.893605    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:16:51.893614    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:16:51.906002    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:16:51.906014    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:16:51.925571    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:16:51.925586    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:16:51.955453    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:51.955478    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:51.960822    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:16:51.960838    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:16:51.995781    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:16:51.995791    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:16:52.023178    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:16:52.023190    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:16:52.052560    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:16:52.052577    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:52.067134    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:52.067146    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:54.605519    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:59.607644    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:59.607791    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:59.620555    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:16:59.620636    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:59.639263    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:16:59.639333    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:59.651214    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:16:59.651289    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:59.662642    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:16:59.662714    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:59.673930    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:16:59.674002    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:59.685078    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:16:59.685151    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:59.695779    4157 logs.go:276] 0 containers: []
	W0826 04:16:59.695793    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:59.695857    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:59.707384    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:16:59.707400    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:16:59.707404    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:16:59.722338    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:16:59.722348    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:16:59.740350    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:16:59.740363    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:16:59.752660    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:59.752674    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:59.796503    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:16:59.796522    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:16:59.811452    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:16:59.811464    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:16:59.851441    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:59.851457    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:59.888402    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:16:59.888415    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:16:59.903481    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:16:59.903494    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:16:59.919634    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:16:59.919647    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:16:59.932432    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:59.932446    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:59.958333    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:59.958350    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:59.963629    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:16:59.963638    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:16:59.975933    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:16:59.975945    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:16:59.993916    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:16:59.993930    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:17:00.005533    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:17:00.005544    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:00.018072    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:17:00.018083    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:17:00.029692    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:17:00.029706    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:17:00.041059    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:17:00.041071    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:17:02.554735    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:07.554981    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:07.555052    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:07.566527    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:17:07.566607    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:07.578599    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:17:07.578678    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:07.589925    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:17:07.589991    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:07.606113    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:17:07.606193    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:07.617574    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:17:07.617646    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:07.628544    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:17:07.628616    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:07.640577    4157 logs.go:276] 0 containers: []
	W0826 04:17:07.640589    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:07.640648    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:07.651122    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:17:07.651140    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:17:07.651145    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:17:07.669438    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:17:07.669446    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:17:07.681669    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:17:07.681681    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:07.695470    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:07.695481    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:07.700455    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:17:07.700468    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:17:07.714970    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:17:07.714983    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:17:07.727450    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:17:07.727458    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:17:07.743410    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:07.743421    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:07.781530    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:17:07.781542    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:17:07.796961    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:17:07.796974    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:17:07.812677    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:17:07.812689    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:17:07.826555    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:17:07.826570    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:17:07.838779    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:17:07.838790    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:17:07.853848    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:17:07.853860    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:17:07.866830    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:07.866843    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:07.891779    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:07.891788    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:07.934006    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:17:07.934022    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:17:07.972003    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:17:07.972013    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:17:07.986126    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:17:07.986140    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:17:10.499249    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:15.501386    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:15.501546    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:15.517527    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:17:15.517564    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:15.531123    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:17:15.531162    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:15.542485    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:17:15.542521    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:15.554137    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:17:15.554213    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:15.570615    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:17:15.570664    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:15.583591    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:17:15.583661    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:15.594811    4157 logs.go:276] 0 containers: []
	W0826 04:17:15.594823    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:15.594882    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:15.606653    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:17:15.606670    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:17:15.606676    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:17:15.647104    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:17:15.647115    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:17:15.663470    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:17:15.663482    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:17:15.675731    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:15.675745    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:15.719458    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:15.719468    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:15.756384    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:17:15.756425    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:17:15.768912    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:17:15.768924    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:17:15.784580    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:17:15.784593    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:17:15.797788    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:17:15.797799    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:17:15.813419    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:15.813431    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:15.818347    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:17:15.818356    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:17:15.834590    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:17:15.834603    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:17:15.847551    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:17:15.847566    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:17:15.860795    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:17:15.860812    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:17:15.875779    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:17:15.875790    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:17:15.894291    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:17:15.894302    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:17:15.909612    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:15.909623    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:15.932935    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:17:15.932942    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:15.944952    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:17:15.944965    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:17:18.461811    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:23.462709    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:23.462791    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:23.474249    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:17:23.474294    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:23.489666    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:17:23.489723    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:23.501102    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:17:23.501174    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:23.512146    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:17:23.512217    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:23.523486    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:17:23.523554    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:23.534753    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:17:23.534826    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:23.545982    4157 logs.go:276] 0 containers: []
	W0826 04:17:23.545995    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:23.546061    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:23.562539    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:17:23.562556    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:17:23.562563    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:17:23.575595    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:23.575608    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:23.580269    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:17:23.580278    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:17:23.619919    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:17:23.619927    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:17:23.634881    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:17:23.634891    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:17:23.647203    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:17:23.647216    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:17:23.665902    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:17:23.665911    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:17:23.680759    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:23.680773    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:23.705980    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:17:23.705996    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:23.719458    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:17:23.719472    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:17:23.734324    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:17:23.734340    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:17:23.745984    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:17:23.745997    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:17:23.760108    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:17:23.760123    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:17:23.779239    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:17:23.779249    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:17:23.800231    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:23.800242    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:23.843513    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:23.843527    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:23.882913    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:17:23.882924    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:17:23.894955    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:17:23.894966    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:17:23.909731    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:17:23.909741    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:17:26.424760    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:31.426978    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:31.427037    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:31.438969    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:17:31.439040    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:31.452399    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:17:31.452464    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:31.464152    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:17:31.464226    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:31.480160    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:17:31.480228    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:31.491676    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:17:31.491766    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:31.503047    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:17:31.503115    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:31.514252    4157 logs.go:276] 0 containers: []
	W0826 04:17:31.514260    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:31.514318    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:31.526201    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:17:31.526217    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:17:31.526222    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:17:31.540958    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:17:31.540970    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:17:31.554202    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:31.554215    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:31.579732    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:17:31.579743    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:31.592760    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:17:31.592771    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:17:31.607006    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:17:31.607016    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:17:31.623445    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:17:31.623453    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:17:31.636248    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:17:31.636259    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:17:31.654909    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:17:31.654923    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:17:31.671181    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:17:31.671193    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:17:31.684176    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:17:31.684190    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:17:31.697202    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:31.697215    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:31.734563    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:17:31.734577    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:17:31.747570    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:17:31.747582    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:17:31.759918    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:31.759929    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:31.801379    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:31.801392    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:31.806405    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:17:31.806415    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:17:31.845170    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:17:31.845186    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:17:31.856184    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:17:31.856200    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:17:34.372672    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:39.374777    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:39.374857    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:39.391481    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:17:39.391548    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:39.403144    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:17:39.403220    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:39.417529    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:17:39.417590    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:39.430448    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:17:39.430523    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:39.441591    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:17:39.441661    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:39.452504    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:17:39.452576    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:39.463277    4157 logs.go:276] 0 containers: []
	W0826 04:17:39.463290    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:39.463355    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:39.474862    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:17:39.474875    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:17:39.474880    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:17:39.490082    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:17:39.490098    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:17:39.502915    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:17:39.502930    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:17:39.515380    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:39.515392    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:39.557154    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:17:39.557162    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:17:39.571742    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:17:39.571751    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:17:39.586266    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:39.586277    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:39.591925    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:17:39.591935    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:17:39.604276    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:39.604287    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:39.629008    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:17:39.629022    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:17:39.641485    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:17:39.641498    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:39.654055    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:17:39.654068    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:17:39.667186    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:17:39.667199    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:17:39.685699    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:17:39.685716    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:17:39.701705    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:17:39.701717    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:17:39.713803    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:17:39.713817    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:17:39.726045    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:39.726057    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:39.769169    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:17:39.769181    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:17:39.810950    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:17:39.810968    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:17:42.327858    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:47.330270    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:47.330411    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:47.346131    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:17:47.346202    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:47.357439    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:17:47.357522    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:47.369356    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:17:47.369429    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:47.380911    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:17:47.380980    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:47.392315    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:17:47.392389    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:47.403161    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:17:47.403232    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:47.413746    4157 logs.go:276] 0 containers: []
	W0826 04:17:47.413760    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:47.413816    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:47.424142    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:17:47.424159    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:47.424167    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:47.429044    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:17:47.429052    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:17:47.442853    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:17:47.442864    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:17:47.458568    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:17:47.458586    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:17:47.472579    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:17:47.472592    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:17:47.490148    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:17:47.490166    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:47.503506    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:17:47.503519    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:17:47.518931    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:17:47.518950    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:17:47.531556    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:17:47.531568    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:17:47.549289    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:47.549301    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:47.573104    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:47.573120    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:47.616427    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:47.616448    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:47.657614    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:17:47.657634    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:17:47.670275    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:17:47.670290    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:17:47.682787    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:17:47.682803    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:17:47.702029    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:17:47.702047    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:17:47.714667    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:17:47.714679    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:17:47.754749    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:17:47.754768    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:17:47.769128    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:17:47.769142    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:17:50.285553    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:55.285709    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:55.285800    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:55.302106    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:17:55.302181    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:55.312917    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:17:55.312987    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:55.324435    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:17:55.324513    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:55.342743    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:17:55.342818    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:55.353985    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:17:55.354058    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:55.368155    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:17:55.368233    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:55.384637    4157 logs.go:276] 0 containers: []
	W0826 04:17:55.384650    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:55.384708    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:55.399666    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:17:55.399683    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:55.399691    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:55.423256    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:55.423272    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:55.467794    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:17:55.467820    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:17:55.482877    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:17:55.482895    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:17:55.498936    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:17:55.498951    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:17:55.514800    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:17:55.514812    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:17:55.529191    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:17:55.529201    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:17:55.543876    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:17:55.543887    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:17:55.555285    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:17:55.555296    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:17:55.567484    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:17:55.567496    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:17:55.587781    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:17:55.587794    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:17:55.602267    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:17:55.602277    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:17:55.617229    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:17:55.617240    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:17:55.628675    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:17:55.628692    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:17:55.640912    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:17:55.640924    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:55.652945    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:55.652960    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:55.657441    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:55.657450    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:55.690956    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:17:55.690971    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:17:55.728808    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:17:55.728819    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:17:58.242862    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:03.245540    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:03.245740    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:18:03.264664    4157 logs.go:276] 2 containers: [ebaf0ab8ed6e 728052cc7045]
	I0826 04:18:03.264761    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:18:03.278627    4157 logs.go:276] 2 containers: [9bff8c79fce6 edd62acc2f9e]
	I0826 04:18:03.278701    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:18:03.290883    4157 logs.go:276] 2 containers: [1238564fbc88 cb531344e36e]
	I0826 04:18:03.290957    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:18:03.301285    4157 logs.go:276] 2 containers: [48dfaf968d22 c9ca783bb30a]
	I0826 04:18:03.301356    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:18:03.312441    4157 logs.go:276] 2 containers: [bb14b3493df5 c78986548653]
	I0826 04:18:03.312510    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:18:03.323276    4157 logs.go:276] 2 containers: [7a6cc2a39c7e 248906c4a556]
	I0826 04:18:03.323350    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:18:03.338154    4157 logs.go:276] 0 containers: []
	W0826 04:18:03.338164    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:18:03.338219    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:18:03.348799    4157 logs.go:276] 2 containers: [a5fe322a216b d3eeaec7b527]
	I0826 04:18:03.348816    4157 logs.go:123] Gathering logs for etcd [9bff8c79fce6] ...
	I0826 04:18:03.348822    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bff8c79fce6"
	I0826 04:18:03.363075    4157 logs.go:123] Gathering logs for coredns [cb531344e36e] ...
	I0826 04:18:03.363087    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb531344e36e"
	I0826 04:18:03.374070    4157 logs.go:123] Gathering logs for kube-scheduler [48dfaf968d22] ...
	I0826 04:18:03.374082    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dfaf968d22"
	I0826 04:18:03.385710    4157 logs.go:123] Gathering logs for kube-proxy [bb14b3493df5] ...
	I0826 04:18:03.385721    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb14b3493df5"
	I0826 04:18:03.397653    4157 logs.go:123] Gathering logs for storage-provisioner [d3eeaec7b527] ...
	I0826 04:18:03.397667    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eeaec7b527"
	I0826 04:18:03.408876    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:18:03.408888    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:18:03.448080    4157 logs.go:123] Gathering logs for kube-apiserver [ebaf0ab8ed6e] ...
	I0826 04:18:03.448095    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebaf0ab8ed6e"
	I0826 04:18:03.461847    4157 logs.go:123] Gathering logs for coredns [1238564fbc88] ...
	I0826 04:18:03.461859    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1238564fbc88"
	I0826 04:18:03.472813    4157 logs.go:123] Gathering logs for kube-proxy [c78986548653] ...
	I0826 04:18:03.472824    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78986548653"
	I0826 04:18:03.487403    4157 logs.go:123] Gathering logs for kube-controller-manager [248906c4a556] ...
	I0826 04:18:03.487413    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248906c4a556"
	I0826 04:18:03.502824    4157 logs.go:123] Gathering logs for storage-provisioner [a5fe322a216b] ...
	I0826 04:18:03.502835    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5fe322a216b"
	I0826 04:18:03.514965    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:18:03.514975    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:18:03.538170    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:18:03.538177    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:18:03.549555    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:18:03.549565    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:18:03.553889    4157 logs.go:123] Gathering logs for kube-scheduler [c9ca783bb30a] ...
	I0826 04:18:03.553896    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ca783bb30a"
	I0826 04:18:03.569166    4157 logs.go:123] Gathering logs for kube-controller-manager [7a6cc2a39c7e] ...
	I0826 04:18:03.569179    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6cc2a39c7e"
	I0826 04:18:03.586802    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:18:03.586813    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:18:03.623469    4157 logs.go:123] Gathering logs for etcd [edd62acc2f9e] ...
	I0826 04:18:03.623478    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edd62acc2f9e"
	I0826 04:18:03.641385    4157 logs.go:123] Gathering logs for kube-apiserver [728052cc7045] ...
	I0826 04:18:03.641394    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 728052cc7045"
	I0826 04:18:06.180894    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:11.183047    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:11.183112    4157 kubeadm.go:597] duration metric: took 4m7.746176083s to restartPrimaryControlPlane
	W0826 04:18:11.183158    4157 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 04:18:11.183174    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0826 04:18:12.245260    4157 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.062091459s)
	I0826 04:18:12.245320    4157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 04:18:12.250326    4157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 04:18:12.253107    4157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 04:18:12.255969    4157 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 04:18:12.255976    4157 kubeadm.go:157] found existing configuration files:
	
	I0826 04:18:12.255997    4157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/admin.conf
	I0826 04:18:12.258895    4157 kubeadm.go:163] "https://control-plane.minikube.internal:50342" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 04:18:12.258926    4157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 04:18:12.261631    4157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/kubelet.conf
	I0826 04:18:12.264438    4157 kubeadm.go:163] "https://control-plane.minikube.internal:50342" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 04:18:12.264461    4157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 04:18:12.267530    4157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/controller-manager.conf
	I0826 04:18:12.270329    4157 kubeadm.go:163] "https://control-plane.minikube.internal:50342" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 04:18:12.270347    4157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 04:18:12.273009    4157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/scheduler.conf
	I0826 04:18:12.275984    4157 kubeadm.go:163] "https://control-plane.minikube.internal:50342" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50342 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 04:18:12.276007    4157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
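
The four grep/rm pairs above are a stale-kubeconfig sweep: any file under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so that the kubeadm init below regenerates it. The same sweep as a Go sketch; the endpoint and file names are taken from the log, and error handling is simplified:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:50342")
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            data, err := os.ReadFile(path)
            if err != nil || !bytes.Contains(data, endpoint) {
                // Missing or pointing at the wrong endpoint: remove it so kubeadm rewrites it.
                os.Remove(path)
                fmt.Println("removed stale", path)
            }
        }
    }
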
	I0826 04:18:12.279170    4157 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 04:18:12.295251    4157 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0826 04:18:12.295283    4157 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 04:18:12.341800    4157 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 04:18:12.341858    4157 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 04:18:12.341905    4157 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 04:18:12.395882    4157 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 04:18:12.403020    4157 out.go:235]   - Generating certificates and keys ...
	I0826 04:18:12.403054    4157 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 04:18:12.403087    4157 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 04:18:12.403135    4157 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 04:18:12.403169    4157 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 04:18:12.403206    4157 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 04:18:12.403235    4157 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 04:18:12.403263    4157 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 04:18:12.403298    4157 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 04:18:12.403339    4157 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 04:18:12.403384    4157 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 04:18:12.403407    4157 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 04:18:12.403444    4157 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 04:18:12.631214    4157 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 04:18:12.730671    4157 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 04:18:12.825431    4157 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 04:18:12.896711    4157 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 04:18:12.923888    4157 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 04:18:12.924148    4157 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 04:18:12.924204    4157 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 04:18:13.012842    4157 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 04:18:13.020044    4157 out.go:235]   - Booting up control plane ...
	I0826 04:18:13.020102    4157 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 04:18:13.020135    4157 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 04:18:13.020168    4157 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 04:18:13.020217    4157 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 04:18:13.020302    4157 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 04:18:17.518291    4157 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502093 seconds
	I0826 04:18:17.518355    4157 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 04:18:17.523203    4157 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 04:18:18.050584    4157 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 04:18:18.051077    4157 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-798000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 04:18:18.556016    4157 kubeadm.go:310] [bootstrap-token] Using token: d61ap3.qag0tebaza2h6e0s
	I0826 04:18:18.558459    4157 out.go:235]   - Configuring RBAC rules ...
	I0826 04:18:18.558517    4157 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 04:18:18.558568    4157 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 04:18:18.563785    4157 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 04:18:18.564479    4157 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 04:18:18.565433    4157 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 04:18:18.566220    4157 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 04:18:18.569239    4157 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 04:18:18.741507    4157 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 04:18:18.961274    4157 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 04:18:18.961737    4157 kubeadm.go:310] 
	I0826 04:18:18.961770    4157 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 04:18:18.961774    4157 kubeadm.go:310] 
	I0826 04:18:18.961819    4157 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 04:18:18.961832    4157 kubeadm.go:310] 
	I0826 04:18:18.961854    4157 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 04:18:18.961885    4157 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 04:18:18.961909    4157 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 04:18:18.961913    4157 kubeadm.go:310] 
	I0826 04:18:18.961937    4157 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 04:18:18.961941    4157 kubeadm.go:310] 
	I0826 04:18:18.961979    4157 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 04:18:18.961984    4157 kubeadm.go:310] 
	I0826 04:18:18.962013    4157 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 04:18:18.962058    4157 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 04:18:18.962100    4157 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 04:18:18.962105    4157 kubeadm.go:310] 
	I0826 04:18:18.962150    4157 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 04:18:18.962190    4157 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 04:18:18.962193    4157 kubeadm.go:310] 
	I0826 04:18:18.962241    4157 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d61ap3.qag0tebaza2h6e0s \
	I0826 04:18:18.962302    4157 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d48d9f38c6f791d9f71a5057d26eee89e43d0c7594d65171e1ecdad9babf1cb8 \
	I0826 04:18:18.962315    4157 kubeadm.go:310] 	--control-plane 
	I0826 04:18:18.962321    4157 kubeadm.go:310] 
	I0826 04:18:18.962378    4157 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 04:18:18.962383    4157 kubeadm.go:310] 
	I0826 04:18:18.962433    4157 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d61ap3.qag0tebaza2h6e0s \
	I0826 04:18:18.962484    4157 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d48d9f38c6f791d9f71a5057d26eee89e43d0c7594d65171e1ecdad9babf1cb8 
	I0826 04:18:18.962547    4157 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
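
The --discovery-token-ca-cert-hash value in the join commands above is defined by kubeadm as the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it on the node; /etc/kubernetes/pki/ca.crt is the standard kubeadm location, assumed here:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes) // first PEM block is the CA certificate
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
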
	I0826 04:18:18.962555    4157 cni.go:84] Creating CNI manager for ""
	I0826 04:18:18.962563    4157 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:18:18.970116    4157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 04:18:18.974217    4157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 04:18:18.977048    4157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
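
The log only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the file body itself is not shown. A representative bridge-plus-host-local conflist of the kind this "Configuring bridge CNI" step installs, written the same way; all field values are illustrative assumptions, not the exact file:

    package main

    import "os"

    // A representative bridge CNI config of the kind minikube installs at
    // /etc/cni/net.d/1-k8s.conflist (values illustrative, not the exact 496-byte file).
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
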
	I0826 04:18:18.981934    4157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 04:18:18.981976    4157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 04:18:18.981977    4157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-798000 minikube.k8s.io/updated_at=2024_08_26T04_18_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=running-upgrade-798000 minikube.k8s.io/primary=true
	I0826 04:18:18.984979    4157 ops.go:34] apiserver oom_adj: -16
	I0826 04:18:19.023938    4157 kubeadm.go:1113] duration metric: took 41.995042ms to wait for elevateKubeSystemPrivileges
	I0826 04:18:19.026986    4157 kubeadm.go:394] duration metric: took 4m15.603748333s to StartCluster
	I0826 04:18:19.027001    4157 settings.go:142] acquiring lock: {Name:mk86204df15f9319a81c6b97808047ffc9e01022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:18:19.027096    4157 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:18:19.027500    4157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/kubeconfig: {Name:mk689667536e8273d65b27bdc18d08f46d2d09b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:18:19.027708    4157 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:18:19.027720    4157 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 04:18:19.027765    4157 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-798000"
	I0826 04:18:19.027781    4157 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-798000"
	I0826 04:18:19.027796    4157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-798000"
	I0826 04:18:19.027796    4157 config.go:182] Loaded profile config "running-upgrade-798000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0826 04:18:19.027822    4157 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-798000"
	W0826 04:18:19.027826    4157 addons.go:243] addon storage-provisioner should already be in state true
	I0826 04:18:19.027835    4157 host.go:66] Checking if "running-upgrade-798000" exists ...
	I0826 04:18:19.028684    4157 kapi.go:59] client config for running-upgrade-798000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/running-upgrade-798000/client.key", CAFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065bbd30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0826 04:18:19.028815    4157 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-798000"
	W0826 04:18:19.028820    4157 addons.go:243] addon default-storageclass should already be in state true
	I0826 04:18:19.028827    4157 host.go:66] Checking if "running-upgrade-798000" exists ...
	I0826 04:18:19.031982    4157 out.go:177] * Verifying Kubernetes components...
	I0826 04:18:19.032396    4157 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 04:18:19.035335    4157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 04:18:19.035341    4157 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/running-upgrade-798000/id_rsa Username:docker}
	I0826 04:18:19.039106    4157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:18:19.043049    4157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:18:19.046147    4157 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 04:18:19.046153    4157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 04:18:19.046159    4157 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/running-upgrade-798000/id_rsa Username:docker}
	I0826 04:18:19.129664    4157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 04:18:19.135690    4157 api_server.go:52] waiting for apiserver process to appear ...
	I0826 04:18:19.135740    4157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:18:19.137442    4157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 04:18:19.141000    4157 api_server.go:72] duration metric: took 113.282958ms to wait for apiserver process to appear ...
	I0826 04:18:19.141011    4157 api_server.go:88] waiting for apiserver healthz status ...
	I0826 04:18:19.141017    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:19.155711    4157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 04:18:19.468430    4157 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0826 04:18:19.468449    4157 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0826 04:18:24.143045    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:24.143108    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:29.143609    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:29.143667    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:34.144033    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:34.144058    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:39.144498    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:39.144543    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:44.145226    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:44.145268    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:49.146096    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:49.146131    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0826 04:18:49.470368    4157 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0826 04:18:49.474696    4157 out.go:177] * Enabled addons: storage-provisioner
	I0826 04:18:49.482599    4157 addons.go:510] duration metric: took 30.45538s for enable addons: enabled=[storage-provisioner]
	I0826 04:18:54.147681    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:54.147717    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:59.149315    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:59.149357    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:04.150649    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:04.150689    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:09.152887    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:09.152908    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:14.154998    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:14.155024    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:19.157173    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:19.157304    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:19.169434    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:19:19.169508    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:19.181011    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:19:19.181084    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:19.191880    4157 logs.go:276] 2 containers: [a290dbe19bc7 93db8db9c2e3]
	I0826 04:19:19.191951    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:19.202420    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:19:19.202494    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:19.213016    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:19:19.213083    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:19.224560    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:19:19.224627    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:19.236094    4157 logs.go:276] 0 containers: []
	W0826 04:19:19.236104    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:19.236161    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:19.247606    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:19:19.247619    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:19:19.247625    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:19:19.263420    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:19.263432    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:19.288938    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:19:19.288947    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:19.301227    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:19.301239    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:19.305796    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:19.305804    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:19.341326    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:19:19.341337    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:19:19.356380    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:19:19.356391    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:19:19.367909    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:19:19.367922    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:19:19.383326    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:19.383336    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:19:19.414696    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:19:19.414789    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:19:19.417078    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:19:19.417084    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:19:19.433540    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:19:19.433549    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:19:19.451169    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:19:19.451180    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:19:19.468285    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:19:19.468298    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:19:19.480240    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:19:19.480250    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:19:19.480279    4157 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0826 04:19:19.480283    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	  Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:19:19.480286    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	  Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:19:19.480291    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:19:19.480293    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:19:29.484252    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:34.486475    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:34.486561    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:34.497291    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:19:34.497356    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:34.508260    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:19:34.508323    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:34.519630    4157 logs.go:276] 2 containers: [a290dbe19bc7 93db8db9c2e3]
	I0826 04:19:34.519705    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:34.530961    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:19:34.531032    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:34.542701    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:19:34.542766    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:34.554297    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:19:34.554362    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:34.565686    4157 logs.go:276] 0 containers: []
	W0826 04:19:34.565696    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:34.565751    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:34.575673    4157 logs.go:276] 1 containers: [cea2a531fea7]
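
The container-discovery block above locates each control-plane component by its k8s_<component> container-name prefix and records the ID that the subsequent docker logs calls consume. A compact sketch of the same pattern, assuming a shell on the guest:

    # Resolve each component to its container ID(s), as the log's
    # ssh_runner calls do one component at a time.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      echo "${c}: ${ids:-<none>}"
    done
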
	I0826 04:19:34.575689    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:19:34.575695    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:19:34.590451    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:19:34.590462    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:19:34.603522    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:19:34.603535    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:34.615611    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:34.615623    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:19:34.648431    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:19:34.648526    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:19:34.650795    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:34.650800    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:34.656052    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:19:34.656062    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:19:34.670284    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:19:34.670299    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:19:34.685572    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:19:34.685584    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:19:34.696637    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:34.696651    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:34.737292    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:19:34.737305    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:19:34.749129    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:19:34.749140    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:19:34.760941    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:19:34.760954    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:19:34.778868    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:34.778882    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:34.803212    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:19:34.803225    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:19:34.803250    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:19:34.803256    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:19:34.803259    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:19:34.803264    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:19:34.803266    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
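
Each gathering pass above runs the same fixed set of commands; performed manually they look like the sketch below, assuming a shell on the guest and using 946570daf38c (the kube-apiserver container from the log) as the example ID:

    # Tail one container's logs, then the relevant systemd units.
    docker logs --tail 400 946570daf38c
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    # Container status, preferring crictl when present (the same
    # fallback the log's "container status" step uses).
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
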
	I0826 04:19:44.807233    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:49.809594    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:49.809935    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:49.843539    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:19:49.843676    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:49.863463    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:19:49.863560    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:49.878724    4157 logs.go:276] 2 containers: [a290dbe19bc7 93db8db9c2e3]
	I0826 04:19:49.878800    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:49.893880    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:19:49.893947    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:49.905549    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:19:49.905625    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:49.916799    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:19:49.916875    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:49.927948    4157 logs.go:276] 0 containers: []
	W0826 04:19:49.927959    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:49.928016    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:49.942524    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:19:49.942538    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:19:49.942544    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:19:49.955241    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:19:49.955250    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:19:49.970998    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:19:49.971013    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:19:49.989875    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:19:49.989891    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:19:50.002323    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:50.002335    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:19:50.035431    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:19:50.035528    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:19:50.037954    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:50.037962    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:50.042701    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:19:50.042712    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:19:50.057269    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:19:50.057282    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:19:50.069845    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:50.069857    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:50.099213    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:50.099228    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:50.137319    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:19:50.137332    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:19:50.153109    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:19:50.153120    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:19:50.167256    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:19:50.167268    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:50.179808    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:19:50.179818    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:19:50.179844    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:19:50.179852    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:19:50.179855    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:19:50.179859    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:19:50.179862    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
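
The recurring kubelet problem in these summaries is an authorization denial rather than a crash: the node credential system:node:running-upgrade-798000 is refused a list of the kube-proxy ConfigMap because the node authorizer reports no relationship between that node and the object, which can happen when the authorizer's pod-to-node graph has not (yet) linked the kube-proxy pod to the node after the upgrade. Were the apiserver answering, the denial could be probed directly; a hypothetical check, assuming a working kubeconfig:

    # Ask the authorizer whether the node credential from the log may list
    # ConfigMaps in kube-system; a "no" would match the reflector errors above.
    kubectl auth can-i list configmaps \
      --as=system:node:running-upgrade-798000 \
      --as-group=system:nodes \
      -n kube-system
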
	I0826 04:20:00.182835    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:05.183885    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:05.184144    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:05.207755    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:20:05.207865    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:05.227184    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:20:05.227262    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:05.240649    4157 logs.go:276] 2 containers: [a290dbe19bc7 93db8db9c2e3]
	I0826 04:20:05.240731    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:05.251710    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:20:05.251784    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:05.262400    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:20:05.262470    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:05.273446    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:20:05.273517    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:05.283867    4157 logs.go:276] 0 containers: []
	W0826 04:20:05.283883    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:05.283941    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:05.294897    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:20:05.294912    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:20:05.294917    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:20:05.314565    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:05.314578    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:05.338618    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:20:05.338628    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:05.350474    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:05.350487    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:20:05.383303    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:20:05.383405    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:20:05.385820    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:05.385829    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:05.422231    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:20:05.422243    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:20:05.438140    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:20:05.438152    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:20:05.449858    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:20:05.449869    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:20:05.461833    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:20:05.461843    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:20:05.473917    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:05.473928    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:05.478436    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:20:05.478444    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:20:05.493187    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:20:05.493198    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:20:05.507044    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:20:05.507054    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:20:05.518660    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:20:05.518670    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:20:05.518697    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:20:05.518703    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:20:05.518707    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:20:05.518710    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:20:05.518714    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:20:15.522717    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:20.524915    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:20.525051    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:20.540857    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:20:20.540943    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:20.558891    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:20:20.558964    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:20.570560    4157 logs.go:276] 2 containers: [a290dbe19bc7 93db8db9c2e3]
	I0826 04:20:20.570628    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:20.583206    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:20:20.583272    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:20.594176    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:20:20.594245    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:20.605161    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:20:20.605225    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:20.615798    4157 logs.go:276] 0 containers: []
	W0826 04:20:20.615813    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:20.615873    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:20.627039    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:20:20.627056    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:20:20.627061    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:20:20.642930    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:20:20.642941    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:20:20.655016    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:20:20.655026    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:20:20.672701    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:20:20.672713    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:20:20.684726    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:20:20.684738    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:20.696552    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:20.696564    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:20:20.728565    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:20:20.728665    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:20:20.731049    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:20.731055    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:20.736046    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:20:20.736058    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:20:20.750771    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:20:20.750785    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:20:20.762905    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:20.762920    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:20.786825    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:20.786836    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:20.823793    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:20:20.823805    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:20:20.840715    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:20:20.840726    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:20:20.853454    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:20:20.853464    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:20:20.853490    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:20:20.853495    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:20:20.853504    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:20:20.853509    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:20:20.853512    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:20:30.857539    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:35.858648    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:35.858846    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:35.882642    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:20:35.882744    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:35.899067    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:20:35.899160    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:35.912264    4157 logs.go:276] 4 containers: [7fe198277c67 ae40de6d158d a290dbe19bc7 93db8db9c2e3]
	I0826 04:20:35.912335    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:35.923676    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:20:35.923731    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:35.934828    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:20:35.934899    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:35.945671    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:20:35.945749    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:35.956074    4157 logs.go:276] 0 containers: []
	W0826 04:20:35.956086    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:35.956143    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:35.966952    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:20:35.966971    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:35.966976    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:35.972149    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:35.972155    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:36.007649    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:20:36.007661    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:20:36.023278    4157 logs.go:123] Gathering logs for coredns [7fe198277c67] ...
	I0826 04:20:36.023289    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe198277c67"
	I0826 04:20:36.035479    4157 logs.go:123] Gathering logs for coredns [ae40de6d158d] ...
	I0826 04:20:36.035491    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae40de6d158d"
	I0826 04:20:36.047153    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:20:36.047164    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:20:36.059341    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:36.059353    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:36.085544    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:20:36.085554    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:20:36.102424    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:20:36.102438    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:20:36.114522    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:20:36.114535    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:20:36.126888    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:20:36.126902    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:20:36.142050    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:20:36.142060    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:20:36.161307    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:20:36.161318    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:36.173519    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:36.173533    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:20:36.205819    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:20:36.205913    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:20:36.208179    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:20:36.208184    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:20:36.224092    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:20:36.224105    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:20:36.224129    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:20:36.224135    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:20:36.224138    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:20:36.224142    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:20:36.224147    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:20:46.227116    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:51.229443    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:51.229677    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:51.255515    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:20:51.255627    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:51.272560    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:20:51.272638    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:51.285877    4157 logs.go:276] 4 containers: [7fe198277c67 ae40de6d158d a290dbe19bc7 93db8db9c2e3]
	I0826 04:20:51.285959    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:51.302458    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:20:51.302528    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:51.312779    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:20:51.312846    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:51.323779    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:20:51.323845    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:51.334201    4157 logs.go:276] 0 containers: []
	W0826 04:20:51.334212    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:51.334272    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:51.344291    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:20:51.344310    4157 logs.go:123] Gathering logs for coredns [7fe198277c67] ...
	I0826 04:20:51.344316    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe198277c67"
	I0826 04:20:51.355776    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:20:51.355790    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:20:51.371069    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:20:51.371081    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:51.383250    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:51.383260    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:51.408989    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:51.408997    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:20:51.443469    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:20:51.443563    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:20:51.445829    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:51.445835    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:51.481111    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:20:51.481126    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:20:51.495162    4157 logs.go:123] Gathering logs for coredns [ae40de6d158d] ...
	I0826 04:20:51.495175    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae40de6d158d"
	I0826 04:20:51.507489    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:20:51.507501    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:20:51.523203    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:20:51.523213    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:20:51.535234    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:20:51.535246    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:20:51.550736    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:20:51.550748    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:20:51.562197    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:51.562209    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:51.566599    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:20:51.566608    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:20:51.584946    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:20:51.584958    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:20:51.596598    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:20:51.596609    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:20:51.596634    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:20:51.596639    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:20:51.596643    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:20:51.596647    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:20:51.596650    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:21:01.600592    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:06.602411    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:06.602837    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:06.643144    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:21:06.643260    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:06.666650    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:21:06.666736    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:06.680863    4157 logs.go:276] 4 containers: [7fe198277c67 ae40de6d158d a290dbe19bc7 93db8db9c2e3]
	I0826 04:21:06.680936    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:06.692595    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:21:06.692663    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:06.704094    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:21:06.704159    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:06.714922    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:21:06.714996    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:06.727619    4157 logs.go:276] 0 containers: []
	W0826 04:21:06.727632    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:06.727692    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:06.738952    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:21:06.738972    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:06.738978    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:21:06.772617    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:21:06.772713    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:21:06.775125    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:06.775133    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:06.780216    4157 logs.go:123] Gathering logs for coredns [7fe198277c67] ...
	I0826 04:21:06.780226    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe198277c67"
	I0826 04:21:06.791922    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:21:06.791933    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:21:06.803327    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:21:06.803339    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:06.821419    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:06.821432    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:06.848076    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:21:06.848088    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:21:06.863154    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:21:06.863166    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:21:06.875402    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:06.875415    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:06.910061    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:21:06.910074    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:21:06.926288    4157 logs.go:123] Gathering logs for coredns [ae40de6d158d] ...
	I0826 04:21:06.926299    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae40de6d158d"
	I0826 04:21:06.938249    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:21:06.938263    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:21:06.950025    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:21:06.950034    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:21:06.965391    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:21:06.965404    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:21:06.981556    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:21:06.981568    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:21:07.010033    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:21:07.010044    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:21:07.010070    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:21:07.010075    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:21:07.010080    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:21:07.010088    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:21:07.010091    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:21:17.012644    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:22.009326    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:22.009417    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:22.020939    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:21:22.021008    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:22.032294    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:21:22.032371    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:22.042772    4157 logs.go:276] 4 containers: [7fe198277c67 ae40de6d158d a290dbe19bc7 93db8db9c2e3]
	I0826 04:21:22.042849    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:22.053143    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:21:22.053208    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:22.063425    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:21:22.063521    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:22.073866    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:21:22.073929    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:22.084202    4157 logs.go:276] 0 containers: []
	W0826 04:21:22.084217    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:22.084269    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:22.095019    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:21:22.095038    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:21:22.095042    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:21:22.106994    4157 logs.go:123] Gathering logs for coredns [ae40de6d158d] ...
	I0826 04:21:22.107005    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae40de6d158d"
	I0826 04:21:22.126913    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:21:22.126927    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:21:22.138519    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:21:22.138530    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:21:22.153518    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:21:22.153531    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:21:22.165920    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:21:22.165930    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:21:22.183108    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:21:22.183121    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:21:22.194377    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:22.194390    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:22.230020    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:21:22.230032    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:21:22.244255    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:21:22.244266    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:21:22.258159    4157 logs.go:123] Gathering logs for coredns [7fe198277c67] ...
	I0826 04:21:22.258176    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe198277c67"
	I0826 04:21:22.269693    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:22.269706    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:22.293666    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:22.293681    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:21:22.325473    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:21:22.325566    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:21:22.327840    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:22.327845    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:22.332096    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:21:22.332102    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:22.343240    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:21:22.343251    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:21:22.343279    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:21:22.343284    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:21:22.343288    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:21:22.343292    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:21:22.343295    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:21:32.338938    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:37.338961    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:37.339119    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:37.353777    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:21:37.353866    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:37.365304    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:21:37.365372    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:37.376134    4157 logs.go:276] 4 containers: [7fe198277c67 ae40de6d158d a290dbe19bc7 93db8db9c2e3]
	I0826 04:21:37.376208    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:37.386761    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:21:37.386838    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:37.397552    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:21:37.397619    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:37.408494    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:21:37.408561    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:37.423032    4157 logs.go:276] 0 containers: []
	W0826 04:21:37.423043    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:37.423099    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:37.433216    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:21:37.433232    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:21:37.433237    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:21:37.444579    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:21:37.444591    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:21:37.459128    4157 logs.go:123] Gathering logs for coredns [ae40de6d158d] ...
	I0826 04:21:37.459138    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae40de6d158d"
	I0826 04:21:37.470424    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:21:37.470435    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:21:37.482076    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:37.482088    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:37.507287    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:37.507296    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:37.512095    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:21:37.512104    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:21:37.526417    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:21:37.526426    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:21:37.540985    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:21:37.540995    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:21:37.552963    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:21:37.552974    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:37.564480    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:37.564496    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:21:37.596126    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:21:37.596220    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:21:37.598490    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:37.598495    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:37.634371    4157 logs.go:123] Gathering logs for coredns [7fe198277c67] ...
	I0826 04:21:37.634382    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe198277c67"
	I0826 04:21:37.646799    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:21:37.646810    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:21:37.658705    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:21:37.658716    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:21:37.675947    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:21:37.675957    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:21:37.675983    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:21:37.675988    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:21:37.675991    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:21:37.675994    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:21:37.675996    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:21:47.677400    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:52.678905    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:52.679111    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:52.708197    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:21:52.708297    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:52.723676    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:21:52.723756    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:52.736361    4157 logs.go:276] 4 containers: [7fe198277c67 ae40de6d158d a290dbe19bc7 93db8db9c2e3]
	I0826 04:21:52.736436    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:52.746919    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:21:52.746992    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:52.763337    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:21:52.763405    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:52.773700    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:21:52.773766    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:52.783920    4157 logs.go:276] 0 containers: []
	W0826 04:21:52.783929    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:52.783984    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:52.794696    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:21:52.794714    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:52.794719    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:52.836212    4157 logs.go:123] Gathering logs for coredns [ae40de6d158d] ...
	I0826 04:21:52.836226    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae40de6d158d"
	I0826 04:21:52.848103    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:21:52.848116    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:21:52.862702    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:21:52.862716    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:21:52.876700    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:21:52.876711    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:21:52.894430    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:21:52.894441    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:21:52.905888    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:21:52.905901    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:21:52.919809    4157 logs.go:123] Gathering logs for coredns [7fe198277c67] ...
	I0826 04:21:52.919820    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe198277c67"
	I0826 04:21:52.931264    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:52.931274    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:52.955996    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:21:52.956003    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:21:52.967585    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:21:52.967595    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:21:52.978792    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:52.978804    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:21:53.010069    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:21:53.010164    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:21:53.012532    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:53.012538    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:53.017410    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:21:53.017416    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:21:53.031904    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:21:53.031915    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:53.043761    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:21:53.043774    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:21:53.043800    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:21:53.043805    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:21:53.043817    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:21:53.043821    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:21:53.043825    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:22:03.046946    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:22:08.049217    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:22:08.049399    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:22:08.071148    4157 logs.go:276] 1 containers: [946570daf38c]
	I0826 04:22:08.071263    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:22:08.086435    4157 logs.go:276] 1 containers: [b708c77ab1a7]
	I0826 04:22:08.086516    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:22:08.099444    4157 logs.go:276] 4 containers: [7fe198277c67 ae40de6d158d a290dbe19bc7 93db8db9c2e3]
	I0826 04:22:08.099515    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:22:08.110719    4157 logs.go:276] 1 containers: [65d8fa7f5c50]
	I0826 04:22:08.110797    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:22:08.121052    4157 logs.go:276] 1 containers: [893784fae7df]
	I0826 04:22:08.121119    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:22:08.131893    4157 logs.go:276] 1 containers: [00731d6626be]
	I0826 04:22:08.131958    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:22:08.150596    4157 logs.go:276] 0 containers: []
	W0826 04:22:08.150610    4157 logs.go:278] No container was found matching "kindnet"
	I0826 04:22:08.150670    4157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:22:08.160744    4157 logs.go:276] 1 containers: [cea2a531fea7]
	I0826 04:22:08.160768    4157 logs.go:123] Gathering logs for coredns [93db8db9c2e3] ...
	I0826 04:22:08.160776    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93db8db9c2e3"
	I0826 04:22:08.172415    4157 logs.go:123] Gathering logs for container status ...
	I0826 04:22:08.172428    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:22:08.185631    4157 logs.go:123] Gathering logs for kubelet ...
	I0826 04:22:08.185644    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 04:22:08.218199    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:22:08.218304    4157 logs.go:138] Found kubelet problem: Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:22:08.220736    4157 logs.go:123] Gathering logs for kube-apiserver [946570daf38c] ...
	I0826 04:22:08.220748    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 946570daf38c"
	I0826 04:22:08.251943    4157 logs.go:123] Gathering logs for coredns [ae40de6d158d] ...
	I0826 04:22:08.251955    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae40de6d158d"
	I0826 04:22:08.263632    4157 logs.go:123] Gathering logs for dmesg ...
	I0826 04:22:08.263643    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:22:08.268211    4157 logs.go:123] Gathering logs for kube-proxy [893784fae7df] ...
	I0826 04:22:08.268218    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893784fae7df"
	I0826 04:22:08.280414    4157 logs.go:123] Gathering logs for kube-controller-manager [00731d6626be] ...
	I0826 04:22:08.280424    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00731d6626be"
	I0826 04:22:08.307541    4157 logs.go:123] Gathering logs for storage-provisioner [cea2a531fea7] ...
	I0826 04:22:08.307557    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cea2a531fea7"
	I0826 04:22:08.318794    4157 logs.go:123] Gathering logs for Docker ...
	I0826 04:22:08.318806    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:22:08.342335    4157 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:22:08.342344    4157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:22:08.377582    4157 logs.go:123] Gathering logs for etcd [b708c77ab1a7] ...
	I0826 04:22:08.377595    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b708c77ab1a7"
	I0826 04:22:08.391964    4157 logs.go:123] Gathering logs for coredns [7fe198277c67] ...
	I0826 04:22:08.391975    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe198277c67"
	I0826 04:22:08.407688    4157 logs.go:123] Gathering logs for coredns [a290dbe19bc7] ...
	I0826 04:22:08.407700    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a290dbe19bc7"
	I0826 04:22:08.419760    4157 logs.go:123] Gathering logs for kube-scheduler [65d8fa7f5c50] ...
	I0826 04:22:08.419771    4157 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d8fa7f5c50"
	I0826 04:22:08.435199    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:22:08.435209    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0826 04:22:08.435232    4157 out.go:270] X Problems detected in kubelet:
	W0826 04:22:08.435236    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: W0826 11:18:31.345565   14285 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	W0826 04:22:08.435239    4157 out.go:270]   Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.345585   14285 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-798000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-798000' and this object
	I0826 04:22:08.435242    4157 out.go:358] Setting ErrFile to fd 2...
	I0826 04:22:08.435245    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:22:18.438829    4157 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:22:23.440804    4157 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:22:23.444805    4157 out.go:201] 
	W0826 04:22:23.448942    4157 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0826 04:22:23.448950    4157 out.go:270] * 
	W0826 04:22:23.449540    4157 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:22:23.460914    4157 out.go:201] 
** /stderr **
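
Editor's note: the dominant kubelet problem in the stderr above is an RBAC denial. Under the Node authorizer, a kubelet may only read configmaps referenced by pods already bound to its node; "no relationship found between node ... and this object" is the authorizer reporting that the link is missing. One way to confirm such a denial against a reachable apiserver is to impersonate the node identity. This is a hypothetical invocation using the node name from this log, not part of the test harness:

	# Ask the apiserver, as the kubelet would, whether listing configmaps is allowed.
	# Requires admin credentials and a healthy apiserver - which this run never got.
	kubectl auth can-i list configmaps --namespace kube-system \
	  --as=system:node:running-upgrade-798000 --as-group=system:nodes
	# Expected answer: "no", since the Node authorizer scopes configmap reads to
	# objects related to pods scheduled on that node.
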
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-798000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-26 04:22:23.553807 -0700 PDT m=+2871.220108876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-798000 -n running-upgrade-798000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-798000 -n running-upgrade-798000: exit status 2 (15.722114459s)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
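
Editor's note: exit status 2 with a "Running" host is consistent with the stderr above: the VM stayed up, but every probe of the apiserver's /healthz endpoint timed out for the full six-minute wait. The same probe can be repeated by hand as a sanity check; a minimal sketch, assuming the guest address and port from the log (https://10.0.2.15:8443) are reachable from wherever the command runs:

	# Re-run the probe api_server.go was attempting: 5-second budget, -k because
	# the apiserver's certificate is issued for the cluster, not for this client.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo "exit=$?"
	# A healthy apiserver answers "ok"; in this run the request would hang until
	# curl exits 28 (operation timed out), matching "context deadline exceeded".
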
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-798000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-336000 sudo cat                            | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo cat                            | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo cat                            | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo cat                            | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo find                           | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo crio                           | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-336000                                     | cilium-336000             | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT | 26 Aug 24 04:11 PDT |
	| start   | -p kubernetes-upgrade-759000                         | kubernetes-upgrade-759000 | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-572000                             | offline-docker-572000     | jenkins | v1.33.1 | 26 Aug 24 04:11 PDT | 26 Aug 24 04:11 PDT |
	| start   | -p stopped-upgrade-743000                            | minikube                  | jenkins | v1.26.0 | 26 Aug 24 04:12 PDT | 26 Aug 24 04:13 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-759000                         | kubernetes-upgrade-759000 | jenkins | v1.33.1 | 26 Aug 24 04:12 PDT | 26 Aug 24 04:12 PDT |
	| start   | -p kubernetes-upgrade-759000                         | kubernetes-upgrade-759000 | jenkins | v1.33.1 | 26 Aug 24 04:12 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-759000                         | kubernetes-upgrade-759000 | jenkins | v1.33.1 | 26 Aug 24 04:12 PDT | 26 Aug 24 04:12 PDT |
	| start   | -p running-upgrade-798000                            | minikube                  | jenkins | v1.26.0 | 26 Aug 24 04:12 PDT | 26 Aug 24 04:13 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-743000 stop                          | minikube                  | jenkins | v1.26.0 | 26 Aug 24 04:13 PDT | 26 Aug 24 04:13 PDT |
	| start   | -p stopped-upgrade-743000                            | stopped-upgrade-743000    | jenkins | v1.33.1 | 26 Aug 24 04:13 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-798000                            | running-upgrade-798000    | jenkins | v1.33.1 | 26 Aug 24 04:13 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-743000                            | stopped-upgrade-743000    | jenkins | v1.33.1 | 26 Aug 24 04:22 PDT | 26 Aug 24 04:22 PDT |
	| start   | -p pause-607000 --memory=2048                        | pause-607000              | jenkins | v1.33.1 | 26 Aug 24 04:22 PDT |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 04:22:30
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 04:22:30.084168    4364 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:22:30.084326    4364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:22:30.084328    4364 out.go:358] Setting ErrFile to fd 2...
	I0826 04:22:30.084329    4364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:22:30.084468    4364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:22:30.085552    4364 out.go:352] Setting JSON to false
	I0826 04:22:30.102595    4364 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3113,"bootTime":1724668237,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:22:30.102664    4364 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:22:30.106481    4364 out.go:177] * [pause-607000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:22:30.113537    4364 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:22:30.113566    4364 notify.go:220] Checking for updates...
	I0826 04:22:30.120569    4364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:22:30.121885    4364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:22:30.125567    4364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:22:30.128467    4364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:22:30.131580    4364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:22:30.134796    4364 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:22:30.134877    4364 config.go:182] Loaded profile config "running-upgrade-798000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0826 04:22:30.134928    4364 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:22:30.139562    4364 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:22:30.146429    4364 start.go:297] selected driver: qemu2
	I0826 04:22:30.146433    4364 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:22:30.146438    4364 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:22:30.148832    4364 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:22:30.152475    4364 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:22:30.155581    4364 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:22:30.155594    4364 cni.go:84] Creating CNI manager for ""
	I0826 04:22:30.155600    4364 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:22:30.155603    4364 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:22:30.155631    4364 start.go:340] cluster config:
	{Name:pause-607000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:22:30.159323    4364 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:22:30.166546    4364 out.go:177] * Starting "pause-607000" primary control-plane node in "pause-607000" cluster
	I0826 04:22:30.170438    4364 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:22:30.170450    4364 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:22:30.170457    4364 cache.go:56] Caching tarball of preloaded images
	I0826 04:22:30.170513    4364 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:22:30.170517    4364 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:22:30.170581    4364 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/pause-607000/config.json ...
	I0826 04:22:30.170590    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/pause-607000/config.json: {Name:mk87012962aefa5ed1e93b18356692e1cb28ec18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:22:30.171053    4364 start.go:360] acquireMachinesLock for pause-607000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:22:30.171083    4364 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "pause-607000"
	I0826 04:22:30.171092    4364 start.go:93] Provisioning new machine with config: &{Name:pause-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:22:30.171116    4364 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:22:30.177486    4364 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0826 04:22:30.199684    4364 start.go:159] libmachine.API.Create for "pause-607000" (driver="qemu2")
	I0826 04:22:30.199714    4364 client.go:168] LocalClient.Create starting
	I0826 04:22:30.199816    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:22:30.199852    4364 main.go:141] libmachine: Decoding PEM data...
	I0826 04:22:30.199863    4364 main.go:141] libmachine: Parsing certificate...
	I0826 04:22:30.199898    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:22:30.199921    4364 main.go:141] libmachine: Decoding PEM data...
	I0826 04:22:30.199930    4364 main.go:141] libmachine: Parsing certificate...
	I0826 04:22:30.200275    4364 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:22:30.373417    4364 main.go:141] libmachine: Creating SSH key...
	I0826 04:22:30.445390    4364 main.go:141] libmachine: Creating Disk image...
	I0826 04:22:30.445394    4364 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:22:30.445567    4364 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/pause-607000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/pause-607000/disk.qcow2
	I0826 04:22:30.455744    4364 main.go:141] libmachine: STDOUT: 
	I0826 04:22:30.455763    4364 main.go:141] libmachine: STDERR: 
	I0826 04:22:30.455811    4364 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/pause-607000/disk.qcow2 +20000M
	I0826 04:22:30.464450    4364 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:22:30.464475    4364 main.go:141] libmachine: STDERR: 
	I0826 04:22:30.464493    4364 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/pause-607000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/pause-607000/disk.qcow2
	I0826 04:22:30.464514    4364 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:22:30.464521    4364 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:22:30.464551    4364 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/pause-607000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/pause-607000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/pause-607000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:68:a2:ec:bf:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/pause-607000/disk.qcow2
	I0826 04:22:30.466448    4364 main.go:141] libmachine: STDOUT: 
	I0826 04:22:30.466462    4364 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:22:30.466481    4364 client.go:171] duration metric: took 266.766791ms to LocalClient.Create
	I0826 04:22:32.468507    4364 start.go:128] duration metric: took 2.297457292s to createHost
	I0826 04:22:32.468541    4364 start.go:83] releasing machines lock for "pause-607000", held for 2.297533583s
	W0826 04:22:32.468581    4364 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:22:32.478125    4364 out.go:177] * Deleting "pause-607000" in qemu2 ...
	W0826 04:22:32.496485    4364 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:22:32.496491    4364 start.go:729] Will try again in 5 seconds ...
	I0826 04:22:37.498484    4364 start.go:360] acquireMachinesLock for pause-607000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:22:37.498702    4364 start.go:364] duration metric: took 185.542µs to acquireMachinesLock for "pause-607000"
	I0826 04:22:37.498727    4364 start.go:93] Provisioning new machine with config: &{Name:pause-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:22:37.498870    4364 start.go:125] createHost starting for "" (driver="qemu2")
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-08-26 11:12:48 UTC, ends at Mon 2024-08-26 11:22:39 UTC. --
	Aug 26 11:22:20 running-upgrade-798000 dockerd[4448]: time="2024-08-26T11:22:20.446441518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 26 11:22:20 running-upgrade-798000 dockerd[4448]: time="2024-08-26T11:22:20.446452726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 26 11:22:20 running-upgrade-798000 dockerd[4448]: time="2024-08-26T11:22:20.446508224Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/da051a8a8058d6197134ab948b9bbd35bf0cc7f3183e9223e36492f6527d70d9 pid=19231 runtime=io.containerd.runc.v2
	Aug 26 11:22:21 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:21Z" level=error msg="ContainerStats resp: {0x400067a6c0 linux}"
	Aug 26 11:22:22 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:22Z" level=error msg="ContainerStats resp: {0x400089e5c0 linux}"
	Aug 26 11:22:22 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:22Z" level=error msg="ContainerStats resp: {0x40008fd800 linux}"
	Aug 26 11:22:22 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:22Z" level=error msg="ContainerStats resp: {0x40009726c0 linux}"
	Aug 26 11:22:22 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:22Z" level=error msg="ContainerStats resp: {0x4000972840 linux}"
	Aug 26 11:22:22 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:22Z" level=error msg="ContainerStats resp: {0x40009b66c0 linux}"
	Aug 26 11:22:22 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:22Z" level=error msg="ContainerStats resp: {0x40009b6800 linux}"
	Aug 26 11:22:22 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:22Z" level=error msg="ContainerStats resp: {0x40009732c0 linux}"
	Aug 26 11:22:24 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:24Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 26 11:22:29 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:29Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 26 11:22:32 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:32Z" level=error msg="ContainerStats resp: {0x400067b840 linux}"
	Aug 26 11:22:32 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:32Z" level=error msg="ContainerStats resp: {0x400067a040 linux}"
	Aug 26 11:22:33 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:33Z" level=error msg="ContainerStats resp: {0x40008fd900 linux}"
	Aug 26 11:22:34 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 26 11:22:34 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:34Z" level=error msg="ContainerStats resp: {0x40009da880 linux}"
	Aug 26 11:22:34 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:34Z" level=error msg="ContainerStats resp: {0x4000754bc0 linux}"
	Aug 26 11:22:34 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:34Z" level=error msg="ContainerStats resp: {0x40009daf40 linux}"
	Aug 26 11:22:34 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:34Z" level=error msg="ContainerStats resp: {0x4000755680 linux}"
	Aug 26 11:22:34 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:34Z" level=error msg="ContainerStats resp: {0x40009dba40 linux}"
	Aug 26 11:22:34 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:34Z" level=error msg="ContainerStats resp: {0x40005ca040 linux}"
	Aug 26 11:22:34 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:34Z" level=error msg="ContainerStats resp: {0x4000478180 linux}"
	Aug 26 11:22:39 running-upgrade-798000 cri-dockerd[4205]: time="2024-08-26T11:22:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	03b047d4592a8       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   dd3d96988f8eb
	da051a8a8058d       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   16a342419a262
	7fe198277c67a       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   16a342419a262
	ae40de6d158d9       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   dd3d96988f8eb
	893784fae7df1       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   9f0234c9b93d5
	cea2a531fea71       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   c113b5c3f67bd
	b708c77ab1a7b       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   d0d0a7582880d
	00731d6626be3       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   410455ea6054a
	946570daf38c3       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   9eeb0f80c1ea3
	65d8fa7f5c506       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   8f7248a46fea3
	
	
	==> coredns [03b047d4592a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9131319461727159612.6312003140523928060. HINFO: read udp 10.244.0.3:35612->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9131319461727159612.6312003140523928060. HINFO: read udp 10.244.0.3:43841->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9131319461727159612.6312003140523928060. HINFO: read udp 10.244.0.3:44552->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9131319461727159612.6312003140523928060. HINFO: read udp 10.244.0.3:43909->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9131319461727159612.6312003140523928060. HINFO: read udp 10.244.0.3:51775->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7fe198277c67] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:43689->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:38176->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:51344->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:38014->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:45423->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:41998->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:47194->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:40812->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:36175->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5224352342746858369.5485557464801072909. HINFO: read udp 10.244.0.2:36197->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ae40de6d158d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:38813->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:56531->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:46275->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:48651->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:45284->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:55818->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:59282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:39025->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:41255->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5713943317629188172.5480949754461050928. HINFO: read udp 10.244.0.3:40917->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [da051a8a8058] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9184065857813339032.7937154983500669488. HINFO: read udp 10.244.0.2:40859->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184065857813339032.7937154983500669488. HINFO: read udp 10.244.0.2:58505->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184065857813339032.7937154983500669488. HINFO: read udp 10.244.0.2:43752->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184065857813339032.7937154983500669488. HINFO: read udp 10.244.0.2:43978->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184065857813339032.7937154983500669488. HINFO: read udp 10.244.0.2:60437->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-798000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-798000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=running-upgrade-798000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T04_18_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:18:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-798000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:22:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:18:18 +0000   Mon, 26 Aug 2024 11:18:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:18:18 +0000   Mon, 26 Aug 2024 11:18:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:18:18 +0000   Mon, 26 Aug 2024 11:18:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:18:18 +0000   Mon, 26 Aug 2024 11:18:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-798000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee37aa7553694d6284712c55a2210942
	  System UUID:                ee37aa7553694d6284712c55a2210942
	  Boot ID:                    092c72ef-82fb-49ee-92f8-778f7f5f1dde
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9m6df                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-bjn9k                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-798000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-running-upgrade-798000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-798000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-gslkx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-798000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m6s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-798000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-798000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-798000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-798000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-798000 event: Registered Node running-upgrade-798000 in Controller
	
	
	==> dmesg <==
	[  +0.080493] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.082812] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.110873] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.099170] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.091433] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.342017] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +8.635335] systemd-fstab-generator[1930]: Ignoring "noauto" for root device
	[ +14.177751] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.798171] systemd-fstab-generator[2697]: Ignoring "noauto" for root device
	[  +0.163605] systemd-fstab-generator[2731]: Ignoring "noauto" for root device
	[  +0.095863] systemd-fstab-generator[2742]: Ignoring "noauto" for root device
	[  +0.103948] systemd-fstab-generator[2755]: Ignoring "noauto" for root device
	[  +0.263973] kauditd_printk_skb: 13 callbacks suppressed
	[ +21.450384] systemd-fstab-generator[4159]: Ignoring "noauto" for root device
	[  +0.095786] systemd-fstab-generator[4173]: Ignoring "noauto" for root device
	[  +0.087719] systemd-fstab-generator[4184]: Ignoring "noauto" for root device
	[  +0.090396] systemd-fstab-generator[4198]: Ignoring "noauto" for root device
	[Aug26 11:14] systemd-fstab-generator[4434]: Ignoring "noauto" for root device
	[  +2.384537] systemd-fstab-generator[4793]: Ignoring "noauto" for root device
	[  +0.960933] systemd-fstab-generator[4938]: Ignoring "noauto" for root device
	[  +3.578172] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.052891] kauditd_printk_skb: 1 callbacks suppressed
	[Aug26 11:18] systemd-fstab-generator[13681]: Ignoring "noauto" for root device
	[  +5.650348] systemd-fstab-generator[14279]: Ignoring "noauto" for root device
	[  +0.465362] systemd-fstab-generator[14426]: Ignoring "noauto" for root device
	
	
	==> etcd [b708c77ab1a7] <==
	{"level":"info","ts":"2024-08-26T11:18:14.197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-26T11:18:14.200Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-26T11:18:14.200Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T11:18:14.200Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-26T11:18:14.200Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-26T11:18:14.200Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T11:18:14.200Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T11:18:14.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-26T11:18:14.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-26T11:18:14.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-26T11:18:14.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-26T11:18:14.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-26T11:18:14.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-26T11:18:14.988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-26T11:18:14.988Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-798000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T11:18:14.988Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T11:18:14.989Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-26T11:18:14.989Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:18:14.989Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T11:18:14.990Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T11:18:14.990Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T11:18:14.990Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T11:18:14.993Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:18:14.993Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:18:14.993Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:22:39 up 9 min,  0 users,  load average: 0.13, 0.34, 0.24
	Linux running-upgrade-798000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [946570daf38c] <==
	I0826 11:18:16.200355       1 controller.go:611] quota admission added evaluator for: namespaces
	I0826 11:18:16.231161       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0826 11:18:16.232282       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0826 11:18:16.232318       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0826 11:18:16.239948       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0826 11:18:16.240018       1 cache.go:39] Caches are synced for autoregister controller
	I0826 11:18:16.246845       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0826 11:18:16.969686       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0826 11:18:17.137922       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0826 11:18:17.141194       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0826 11:18:17.141219       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0826 11:18:17.280593       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0826 11:18:17.290832       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0826 11:18:17.304594       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0826 11:18:17.306258       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0826 11:18:17.306648       1 controller.go:611] quota admission added evaluator for: endpoints
	I0826 11:18:17.308210       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0826 11:18:18.302815       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0826 11:18:18.755494       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0826 11:18:18.759788       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0826 11:18:18.772911       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0826 11:18:18.811866       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0826 11:18:31.339783       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0826 11:18:31.439042       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0826 11:18:33.696000       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [00731d6626be] <==
	I0826 11:18:31.288540       1 shared_informer.go:262] Caches are synced for taint
	I0826 11:18:31.288568       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0826 11:18:31.288600       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-798000. Assuming now as a timestamp.
	I0826 11:18:31.288727       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0826 11:18:31.288856       1 shared_informer.go:262] Caches are synced for TTL
	I0826 11:18:31.289484       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0826 11:18:31.290492       1 event.go:294] "Event occurred" object="running-upgrade-798000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-798000 event: Registered Node running-upgrade-798000 in Controller"
	I0826 11:18:31.343036       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gslkx"
	I0826 11:18:31.380457       1 shared_informer.go:262] Caches are synced for cronjob
	I0826 11:18:31.387466       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0826 11:18:31.388615       1 shared_informer.go:262] Caches are synced for crt configmap
	I0826 11:18:31.415414       1 shared_informer.go:262] Caches are synced for resource quota
	I0826 11:18:31.418576       1 shared_informer.go:262] Caches are synced for PVC protection
	I0826 11:18:31.420716       1 shared_informer.go:262] Caches are synced for ephemeral
	I0826 11:18:31.440239       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0826 11:18:31.454734       1 shared_informer.go:262] Caches are synced for expand
	I0826 11:18:31.478714       1 shared_informer.go:262] Caches are synced for stateful set
	I0826 11:18:31.490112       1 shared_informer.go:262] Caches are synced for resource quota
	I0826 11:18:31.490153       1 shared_informer.go:262] Caches are synced for attach detach
	I0826 11:18:31.538963       1 shared_informer.go:262] Caches are synced for persistent volume
	I0826 11:18:31.905079       1 shared_informer.go:262] Caches are synced for garbage collector
	I0826 11:18:31.966648       1 shared_informer.go:262] Caches are synced for garbage collector
	I0826 11:18:31.966663       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0826 11:18:32.290901       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-bjn9k"
	I0826 11:18:32.293481       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9m6df"
	
	
	==> kube-proxy [893784fae7df] <==
	I0826 11:18:33.684413       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0826 11:18:33.684445       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0826 11:18:33.684459       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0826 11:18:33.693869       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0826 11:18:33.693879       1 server_others.go:206] "Using iptables Proxier"
	I0826 11:18:33.693936       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0826 11:18:33.694167       1 server.go:661] "Version info" version="v1.24.1"
	I0826 11:18:33.694173       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:18:33.694428       1 config.go:317] "Starting service config controller"
	I0826 11:18:33.694437       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0826 11:18:33.694444       1 config.go:226] "Starting endpoint slice config controller"
	I0826 11:18:33.694446       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0826 11:18:33.695237       1 config.go:444] "Starting node config controller"
	I0826 11:18:33.695273       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0826 11:18:33.794865       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0826 11:18:33.794866       1 shared_informer.go:262] Caches are synced for service config
	I0826 11:18:33.795456       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [65d8fa7f5c50] <==
	W0826 11:18:16.194600       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0826 11:18:16.194622       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0826 11:18:16.195468       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0826 11:18:16.195595       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0826 11:18:16.196178       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 11:18:16.196208       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0826 11:18:16.196236       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0826 11:18:16.196450       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0826 11:18:16.196488       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 11:18:16.196508       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0826 11:18:16.196601       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 11:18:16.196630       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0826 11:18:16.197635       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 11:18:16.197673       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0826 11:18:17.007296       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 11:18:17.007502       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0826 11:18:17.034828       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0826 11:18:17.034899       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0826 11:18:17.046589       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0826 11:18:17.046609       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0826 11:18:17.149207       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 11:18:17.149322       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0826 11:18:17.217966       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0826 11:18:17.218055       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0826 11:18:17.389695       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-08-26 11:12:48 UTC, ends at Mon 2024-08-26 11:22:39 UTC. --
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: I0826 11:18:31.416543   14285 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfk7z\" (UniqueName: \"kubernetes.io/projected/ff41fdcf-5ae5-4c36-b438-cbe749322544-kube-api-access-wfk7z\") pod \"storage-provisioner\" (UID: \"ff41fdcf-5ae5-4c36-b438-cbe749322544\") " pod="kube-system/storage-provisioner"
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: I0826 11:18:31.516909   14285 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b-lib-modules\") pod \"kube-proxy-gslkx\" (UID: \"b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b\") " pod="kube-system/kube-proxy-gslkx"
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: I0826 11:18:31.516939   14285 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b-xtables-lock\") pod \"kube-proxy-gslkx\" (UID: \"b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b\") " pod="kube-system/kube-proxy-gslkx"
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: I0826 11:18:31.516949   14285 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b-kube-api-access-2nrw5\") pod \"kube-proxy-gslkx\" (UID: \"b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b\") " pod="kube-system/kube-proxy-gslkx"
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: I0826 11:18:31.516966   14285 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b-kube-proxy\") pod \"kube-proxy-gslkx\" (UID: \"b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b\") " pod="kube-system/kube-proxy-gslkx"
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.519827   14285 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.519844   14285 projected.go:192] Error preparing data for projected volume kube-api-access-wfk7z for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.519876   14285 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/ff41fdcf-5ae5-4c36-b438-cbe749322544-kube-api-access-wfk7z podName:ff41fdcf-5ae5-4c36-b438-cbe749322544 nodeName:}" failed. No retries permitted until 2024-08-26 11:18:32.01986444 +0000 UTC m=+13.275873966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wfk7z" (UniqueName: "kubernetes.io/projected/ff41fdcf-5ae5-4c36-b438-cbe749322544-kube-api-access-wfk7z") pod "storage-provisioner" (UID: "ff41fdcf-5ae5-4c36-b438-cbe749322544") : configmap "kube-root-ca.crt" not found
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.699355   14285 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.699377   14285 projected.go:192] Error preparing data for projected volume kube-api-access-2nrw5 for pod kube-system/kube-proxy-gslkx: configmap "kube-root-ca.crt" not found
	Aug 26 11:18:31 running-upgrade-798000 kubelet[14285]: E0826 11:18:31.699412   14285 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b-kube-api-access-2nrw5 podName:b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b nodeName:}" failed. No retries permitted until 2024-08-26 11:18:32.19939753 +0000 UTC m=+13.455407056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2nrw5" (UniqueName: "kubernetes.io/projected/b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b-kube-api-access-2nrw5") pod "kube-proxy-gslkx" (UID: "b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b") : configmap "kube-root-ca.crt" not found
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: E0826 11:18:32.120540   14285 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: E0826 11:18:32.120562   14285 projected.go:192] Error preparing data for projected volume kube-api-access-wfk7z for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: E0826 11:18:32.120591   14285 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/ff41fdcf-5ae5-4c36-b438-cbe749322544-kube-api-access-wfk7z podName:ff41fdcf-5ae5-4c36-b438-cbe749322544 nodeName:}" failed. No retries permitted until 2024-08-26 11:18:33.12058181 +0000 UTC m=+14.376591336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wfk7z" (UniqueName: "kubernetes.io/projected/ff41fdcf-5ae5-4c36-b438-cbe749322544-kube-api-access-wfk7z") pod "storage-provisioner" (UID: "ff41fdcf-5ae5-4c36-b438-cbe749322544") : configmap "kube-root-ca.crt" not found
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: E0826 11:18:32.220781   14285 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: E0826 11:18:32.220795   14285 projected.go:192] Error preparing data for projected volume kube-api-access-2nrw5 for pod kube-system/kube-proxy-gslkx: configmap "kube-root-ca.crt" not found
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: E0826 11:18:32.220817   14285 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b-kube-api-access-2nrw5 podName:b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b nodeName:}" failed. No retries permitted until 2024-08-26 11:18:33.220805655 +0000 UTC m=+14.476815180 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nrw5" (UniqueName: "kubernetes.io/projected/b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b-kube-api-access-2nrw5") pod "kube-proxy-gslkx" (UID: "b2ac0d16-e5bb-451c-a7cc-fd774d1fbd7b") : configmap "kube-root-ca.crt" not found
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: I0826 11:18:32.294346   14285 topology_manager.go:200] "Topology Admit Handler"
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: I0826 11:18:32.297179   14285 topology_manager.go:200] "Topology Admit Handler"
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: I0826 11:18:32.321382   14285 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e19b83b-5da3-4fe3-be77-3f26dbf2be91-config-volume\") pod \"coredns-6d4b75cb6d-9m6df\" (UID: \"4e19b83b-5da3-4fe3-be77-3f26dbf2be91\") " pod="kube-system/coredns-6d4b75cb6d-9m6df"
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: I0826 11:18:32.321415   14285 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxcb\" (UniqueName: \"kubernetes.io/projected/9b0d48a0-5ff1-4e15-8ec8-95945f0512bf-kube-api-access-qrxcb\") pod \"coredns-6d4b75cb6d-bjn9k\" (UID: \"9b0d48a0-5ff1-4e15-8ec8-95945f0512bf\") " pod="kube-system/coredns-6d4b75cb6d-bjn9k"
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: I0826 11:18:32.321430   14285 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b0d48a0-5ff1-4e15-8ec8-95945f0512bf-config-volume\") pod \"coredns-6d4b75cb6d-bjn9k\" (UID: \"9b0d48a0-5ff1-4e15-8ec8-95945f0512bf\") " pod="kube-system/coredns-6d4b75cb6d-bjn9k"
	Aug 26 11:18:32 running-upgrade-798000 kubelet[14285]: I0826 11:18:32.321440   14285 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlzkq\" (UniqueName: \"kubernetes.io/projected/4e19b83b-5da3-4fe3-be77-3f26dbf2be91-kube-api-access-tlzkq\") pod \"coredns-6d4b75cb6d-9m6df\" (UID: \"4e19b83b-5da3-4fe3-be77-3f26dbf2be91\") " pod="kube-system/coredns-6d4b75cb6d-9m6df"
	Aug 26 11:22:21 running-upgrade-798000 kubelet[14285]: I0826 11:22:21.346479   14285 scope.go:110] "RemoveContainer" containerID="93db8db9c2e318d9425ac13a0d89a71eb98e0f7d395c2320ed50c7a1318108fd"
	Aug 26 11:22:21 running-upgrade-798000 kubelet[14285]: I0826 11:22:21.378396   14285 scope.go:110] "RemoveContainer" containerID="a290dbe19bc7c134c7d03ce4e476318893ce9612483a1456ccc74bf7a77dd147"
	
	
	==> storage-provisioner [cea2a531fea7] <==
	I0826 11:18:33.624296       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 11:18:33.631822       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 11:18:33.631840       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 11:18:33.636574       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 11:18:33.637471       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-798000_a0ab6d5a-cac4-4f57-87fc-24f5802fdf46!
	I0826 11:18:33.637617       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cec7faf-0ccf-4ed3-a726-1c9d43e96dfe", APIVersion:"v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-798000_a0ab6d5a-cac4-4f57-87fc-24f5802fdf46 became leader
	I0826 11:18:33.740303       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-798000_a0ab6d5a-cac4-4f57-87fc-24f5802fdf46!
	

-- /stdout --
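
Note on the CoreDNS sections in the dump above: every HINFO probe from both pods times out against 10.0.2.3:53, the DNS proxy that QEMU's user-mode networking exposes inside the guest, so the upgraded cluster's pods come up but never reach an upstream resolver. The sketch below (illustrative Go, not part of the test suite; it assumes it is run from inside the guest, and the queried hostname is arbitrary) reproduces the kind of forwarded lookup that is failing:

	// dnsprobe.go - illustrative sketch: forwards a lookup to the QEMU
	// user-net resolver the way CoreDNS's forward plugin does.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Send every query to 10.0.2.3:53, the slirp DNS address
			// visible from the guest network.
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		// On this host the read times out, matching the repeated
		// "i/o timeout" errors in the CoreDNS logs above.
		addrs, err := r.LookupHost(ctx, "example.com")
		fmt.Println(addrs, err)
	}
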
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-798000 -n running-upgrade-798000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-798000 -n running-upgrade-798000: exit status 2 (15.646920625s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-798000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-798000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-798000
--- FAIL: TestRunningBinaryUpgrade (645.39s)
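
The create-host failures in this run (the pause-607000 retry loop above, and TestKubernetesUpgrade below) share one root cause: minikube launches qemu-system-aarch64 through socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet; with no daemon listening, the dial fails with "Connection refused" before any VM boots. A minimal pre-flight check, as a hedged sketch (hypothetical helper, not minikube code):

	// vmnetcheck.go - illustrative sketch: confirms a socket_vmnet daemon
	// is accepting connections before a qemu2 machine is created.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this CI host the dial fails exactly as in the logs:
			// "connect: connection refused" (or "no such file or directory"
			// if the socket was never created).
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
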

TestKubernetesUpgrade (19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.027372583s)

-- stdout --
	* [kubernetes-upgrade-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-759000" primary control-plane node in "kubernetes-upgrade-759000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-759000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:11:51.436646    4050 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:11:51.436781    4050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:11:51.436786    4050 out.go:358] Setting ErrFile to fd 2...
	I0826 04:11:51.436789    4050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:11:51.436912    4050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:11:51.437933    4050 out.go:352] Setting JSON to false
	I0826 04:11:51.453741    4050 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2474,"bootTime":1724668237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:11:51.453829    4050 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:11:51.457561    4050 out.go:177] * [kubernetes-upgrade-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:11:51.466428    4050 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:11:51.466473    4050 notify.go:220] Checking for updates...
	I0826 04:11:51.474325    4050 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:11:51.477465    4050 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:11:51.480491    4050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:11:51.483526    4050 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:11:51.486507    4050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:11:51.489796    4050 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:11:51.489861    4050 config.go:182] Loaded profile config "offline-docker-572000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:11:51.489905    4050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:11:51.493541    4050 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:11:51.500513    4050 start.go:297] selected driver: qemu2
	I0826 04:11:51.500523    4050 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:11:51.500530    4050 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:11:51.502635    4050 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:11:51.505449    4050 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:11:51.509474    4050 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0826 04:11:51.509496    4050 cni.go:84] Creating CNI manager for ""
	I0826 04:11:51.509512    4050 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0826 04:11:51.509533    4050 start.go:340] cluster config:
	{Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:11:51.513084    4050 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:11:51.521481    4050 out.go:177] * Starting "kubernetes-upgrade-759000" primary control-plane node in "kubernetes-upgrade-759000" cluster
	I0826 04:11:51.525496    4050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0826 04:11:51.525514    4050 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0826 04:11:51.525526    4050 cache.go:56] Caching tarball of preloaded images
	I0826 04:11:51.525596    4050 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:11:51.525602    4050 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0826 04:11:51.525684    4050 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/kubernetes-upgrade-759000/config.json ...
	I0826 04:11:51.525697    4050 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/kubernetes-upgrade-759000/config.json: {Name:mk8616bb0e3247d76c0ea38cce229f9f22d7c4b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:11:51.526109    4050 start.go:360] acquireMachinesLock for kubernetes-upgrade-759000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:11:51.560092    4050 start.go:364] duration metric: took 33.971083ms to acquireMachinesLock for "kubernetes-upgrade-759000"
	I0826 04:11:51.560114    4050 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:11:51.560175    4050 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:11:51.569515    4050 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:11:51.591487    4050 start.go:159] libmachine.API.Create for "kubernetes-upgrade-759000" (driver="qemu2")
	I0826 04:11:51.591518    4050 client.go:168] LocalClient.Create starting
	I0826 04:11:51.591592    4050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:11:51.591634    4050 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:51.591651    4050 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:51.591696    4050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:11:51.591724    4050 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:51.591737    4050 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:51.592141    4050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:11:51.837294    4050 main.go:141] libmachine: Creating SSH key...
	I0826 04:11:52.038922    4050 main.go:141] libmachine: Creating Disk image...
	I0826 04:11:52.038930    4050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:11:52.039117    4050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2
	I0826 04:11:52.048612    4050 main.go:141] libmachine: STDOUT: 
	I0826 04:11:52.048630    4050 main.go:141] libmachine: STDERR: 
	I0826 04:11:52.048677    4050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2 +20000M
	I0826 04:11:52.056628    4050 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:11:52.056650    4050 main.go:141] libmachine: STDERR: 
	I0826 04:11:52.056667    4050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2
	I0826 04:11:52.056673    4050 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:11:52.056681    4050 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:11:52.056707    4050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:eb:95:95:52:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2
	I0826 04:11:52.058280    4050 main.go:141] libmachine: STDOUT: 
	I0826 04:11:52.058297    4050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:11:52.058316    4050 client.go:171] duration metric: took 466.799167ms to LocalClient.Create
	I0826 04:11:54.060447    4050 start.go:128] duration metric: took 2.500295375s to createHost
	I0826 04:11:54.060499    4050 start.go:83] releasing machines lock for "kubernetes-upgrade-759000", held for 2.500436125s
	W0826 04:11:54.060589    4050 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:11:54.081543    4050 out.go:177] * Deleting "kubernetes-upgrade-759000" in qemu2 ...
	W0826 04:11:54.115149    4050 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:11:54.115177    4050 start.go:729] Will try again in 5 seconds ...
	I0826 04:11:59.117250    4050 start.go:360] acquireMachinesLock for kubernetes-upgrade-759000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:11:59.117381    4050 start.go:364] duration metric: took 85.75µs to acquireMachinesLock for "kubernetes-upgrade-759000"
	I0826 04:11:59.117407    4050 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:11:59.117467    4050 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:11:59.126862    4050 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:11:59.145369    4050 start.go:159] libmachine.API.Create for "kubernetes-upgrade-759000" (driver="qemu2")
	I0826 04:11:59.145400    4050 client.go:168] LocalClient.Create starting
	I0826 04:11:59.145461    4050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:11:59.145499    4050 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:59.145521    4050 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:59.145555    4050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:11:59.145579    4050 main.go:141] libmachine: Decoding PEM data...
	I0826 04:11:59.145588    4050 main.go:141] libmachine: Parsing certificate...
	I0826 04:11:59.145874    4050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:11:59.299278    4050 main.go:141] libmachine: Creating SSH key...
	I0826 04:11:59.378857    4050 main.go:141] libmachine: Creating Disk image...
	I0826 04:11:59.378862    4050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:11:59.379018    4050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2
	I0826 04:11:59.388573    4050 main.go:141] libmachine: STDOUT: 
	I0826 04:11:59.388594    4050 main.go:141] libmachine: STDERR: 
	I0826 04:11:59.388658    4050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2 +20000M
	I0826 04:11:59.397016    4050 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:11:59.397031    4050 main.go:141] libmachine: STDERR: 
	I0826 04:11:59.397045    4050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2
	I0826 04:11:59.397050    4050 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:11:59.397060    4050 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:11:59.397091    4050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:4d:70:2e:c8:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2
	I0826 04:11:59.398702    4050 main.go:141] libmachine: STDOUT: 
	I0826 04:11:59.398717    4050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:11:59.398729    4050 client.go:171] duration metric: took 253.330333ms to LocalClient.Create
	I0826 04:12:01.399035    4050 start.go:128] duration metric: took 2.281577s to createHost
	I0826 04:12:01.399104    4050 start.go:83] releasing machines lock for "kubernetes-upgrade-759000", held for 2.281748583s
	W0826 04:12:01.399410    4050 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:12:01.407996    4050 out.go:201] 
	W0826 04:12:01.412248    4050 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:12:01.412290    4050 out.go:270] * 
	* 
	W0826 04:12:01.414071    4050 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:12:01.423974    4050 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-759000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-759000: (3.487547583s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-759000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-759000 status --format={{.Host}}: exit status 7 (56.469291ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.242224209s)

-- stdout --
	* [kubernetes-upgrade-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-759000" primary control-plane node in "kubernetes-upgrade-759000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-759000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-759000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:12:05.013968    4100 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:12:05.014081    4100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:12:05.014084    4100 out.go:358] Setting ErrFile to fd 2...
	I0826 04:12:05.014087    4100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:12:05.014220    4100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:12:05.015452    4100 out.go:352] Setting JSON to false
	I0826 04:12:05.032332    4100 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2488,"bootTime":1724668237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:12:05.032405    4100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:12:05.037037    4100 out.go:177] * [kubernetes-upgrade-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:12:05.044033    4100 notify.go:220] Checking for updates...
	I0826 04:12:05.047080    4100 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:12:05.054088    4100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:12:05.077107    4100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:12:05.085125    4100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:12:05.092062    4100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:12:05.099061    4100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:12:05.103382    4100 config.go:182] Loaded profile config "kubernetes-upgrade-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0826 04:12:05.103632    4100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:12:05.106995    4100 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:12:05.114082    4100 start.go:297] selected driver: qemu2
	I0826 04:12:05.114088    4100 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:12:05.114140    4100 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:12:05.116518    4100 cni.go:84] Creating CNI manager for ""
	I0826 04:12:05.116537    4100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:12:05.116565    4100 start.go:340] cluster config:
	{Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:12:05.120211    4100 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:12:05.128119    4100 out.go:177] * Starting "kubernetes-upgrade-759000" primary control-plane node in "kubernetes-upgrade-759000" cluster
	I0826 04:12:05.130995    4100 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:12:05.131007    4100 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:12:05.131014    4100 cache.go:56] Caching tarball of preloaded images
	I0826 04:12:05.131067    4100 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:12:05.131072    4100 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:12:05.131128    4100 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/kubernetes-upgrade-759000/config.json ...
	I0826 04:12:05.131392    4100 start.go:360] acquireMachinesLock for kubernetes-upgrade-759000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:12:05.131417    4100 start.go:364] duration metric: took 19.417µs to acquireMachinesLock for "kubernetes-upgrade-759000"
	I0826 04:12:05.131430    4100 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:12:05.131436    4100 fix.go:54] fixHost starting: 
	I0826 04:12:05.131540    4100 fix.go:112] recreateIfNeeded on kubernetes-upgrade-759000: state=Stopped err=<nil>
	W0826 04:12:05.131549    4100 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:12:05.139072    4100 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-759000" ...
	I0826 04:12:05.152497    4100 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:12:05.152537    4100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:4d:70:2e:c8:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2
	I0826 04:12:05.154563    4100 main.go:141] libmachine: STDOUT: 
	I0826 04:12:05.154580    4100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:12:05.154608    4100 fix.go:56] duration metric: took 23.173042ms for fixHost
	I0826 04:12:05.154611    4100 start.go:83] releasing machines lock for "kubernetes-upgrade-759000", held for 23.190541ms
	W0826 04:12:05.154618    4100 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:12:05.154651    4100 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:12:05.154654    4100 start.go:729] Will try again in 5 seconds ...
	I0826 04:12:10.155675    4100 start.go:360] acquireMachinesLock for kubernetes-upgrade-759000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:12:10.156372    4100 start.go:364] duration metric: took 522.542µs to acquireMachinesLock for "kubernetes-upgrade-759000"
	I0826 04:12:10.156511    4100 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:12:10.156533    4100 fix.go:54] fixHost starting: 
	I0826 04:12:10.157323    4100 fix.go:112] recreateIfNeeded on kubernetes-upgrade-759000: state=Stopped err=<nil>
	W0826 04:12:10.157349    4100 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:12:10.166890    4100 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-759000" ...
	I0826 04:12:10.171856    4100 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:12:10.172243    4100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:4d:70:2e:c8:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubernetes-upgrade-759000/disk.qcow2
	I0826 04:12:10.182120    4100 main.go:141] libmachine: STDOUT: 
	I0826 04:12:10.182200    4100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:12:10.182295    4100 fix.go:56] duration metric: took 25.763958ms for fixHost
	I0826 04:12:10.182315    4100 start.go:83] releasing machines lock for "kubernetes-upgrade-759000", held for 25.917292ms
	W0826 04:12:10.182524    4100 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-759000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-759000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:12:10.190859    4100 out.go:201] 
	W0826 04:12:10.198030    4100 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:12:10.198062    4100 out.go:270] * 
	* 
	W0826 04:12:10.200775    4100 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:12:10.212912    4100 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-759000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-759000 version --output=json: exit status 1 (61.107292ms)

** stderr ** 
	error: context "kubernetes-upgrade-759000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-26 04:12:10.288243 -0700 PDT m=+2257.922904376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-759000 -n kubernetes-upgrade-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-759000 -n kubernetes-upgrade-759000: exit status 7 (33.729292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-759000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-759000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-759000
--- FAIL: TestKubernetesUpgrade (19.00s)
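
Note: every VM start in this test, the initial create, its retry, and both post-stop restarts, fails with the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused', which points at the socket_vmnet daemon on the CI host rather than at the v1.20.0-to-v1.31.0 upgrade path itself. A hedged check sketch, assuming a Homebrew-managed socket_vmnet install as described in minikube's qemu2 driver documentation (the socket path is taken from the log above):

	ls -l /var/run/socket_vmnet              # the socket the driver dials; missing or stale if the daemon is down
	sudo brew services restart socket_vmnet  # restart the daemon (Homebrew-managed install assumed)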

TestStoppedBinaryUpgrade/Upgrade (595.56s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.288772989 start -p stopped-upgrade-743000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.288772989 start -p stopped-upgrade-743000 --memory=2200 --vm-driver=qemu2 : (1m1.774647375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.288772989 -p stopped-upgrade-743000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.288772989 -p stopped-upgrade-743000 stop: (12.0986s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-743000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-743000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.573880083s)

-- stdout --
	* [stopped-upgrade-743000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-743000" primary control-plane node in "stopped-upgrade-743000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-743000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0826 04:13:14.077665    4148 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:13:14.077879    4148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:13:14.077882    4148 out.go:358] Setting ErrFile to fd 2...
	I0826 04:13:14.077885    4148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:13:14.078013    4148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:13:14.079189    4148 out.go:352] Setting JSON to false
	I0826 04:13:14.097478    4148 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2557,"bootTime":1724668237,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:13:14.097549    4148 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:13:14.102196    4148 out.go:177] * [stopped-upgrade-743000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:13:14.106261    4148 notify.go:220] Checking for updates...
	I0826 04:13:14.111167    4148 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:13:14.118193    4148 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:13:14.128179    4148 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:13:14.135200    4148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:13:14.142112    4148 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:13:14.145094    4148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:13:14.148495    4148 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0826 04:13:14.151137    4148 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0826 04:13:14.154166    4148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:13:14.158131    4148 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:13:14.165154    4148 start.go:297] selected driver: qemu2
	I0826 04:13:14.165165    4148 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0826 04:13:14.165239    4148 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:13:14.168209    4148 cni.go:84] Creating CNI manager for ""
	I0826 04:13:14.168232    4148 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:13:14.168258    4148 start.go:340] cluster config:
	{Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0826 04:13:14.168323    4148 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:13:14.176093    4148 out.go:177] * Starting "stopped-upgrade-743000" primary control-plane node in "stopped-upgrade-743000" cluster
	I0826 04:13:14.180094    4148 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0826 04:13:14.180115    4148 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0826 04:13:14.180120    4148 cache.go:56] Caching tarball of preloaded images
	I0826 04:13:14.180188    4148 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:13:14.180194    4148 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0826 04:13:14.180244    4148 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/config.json ...
	I0826 04:13:14.180624    4148 start.go:360] acquireMachinesLock for stopped-upgrade-743000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:13:14.180656    4148 start.go:364] duration metric: took 23.875µs to acquireMachinesLock for "stopped-upgrade-743000"
	I0826 04:13:14.180664    4148 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:13:14.180669    4148 fix.go:54] fixHost starting: 
	I0826 04:13:14.180778    4148 fix.go:112] recreateIfNeeded on stopped-upgrade-743000: state=Stopped err=<nil>
	W0826 04:13:14.180786    4148 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:13:14.185160    4148 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-743000" ...
	I0826 04:13:14.193190    4148 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:13:14.193290    4148 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50229-:22,hostfwd=tcp::50230-:2376,hostname=stopped-upgrade-743000 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/disk.qcow2
	I0826 04:13:14.241371    4148 main.go:141] libmachine: STDOUT: 
	I0826 04:13:14.241401    4148 main.go:141] libmachine: STDERR: 
	I0826 04:13:14.241408    4148 main.go:141] libmachine: Waiting for VM to start (ssh -p 50229 docker@127.0.0.1)...
	I0826 04:13:33.917023    4148 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/config.json ...
	I0826 04:13:33.918082    4148 machine.go:93] provisionDockerMachine start ...
	I0826 04:13:33.918263    4148 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:33.918842    4148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025d85a0] 0x1025dae00 <nil>  [] 0s} localhost 50229 <nil> <nil>}
	I0826 04:13:33.918867    4148 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 04:13:33.993525    4148 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 04:13:33.993549    4148 buildroot.go:166] provisioning hostname "stopped-upgrade-743000"
	I0826 04:13:33.993610    4148 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:33.993761    4148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025d85a0] 0x1025dae00 <nil>  [] 0s} localhost 50229 <nil> <nil>}
	I0826 04:13:33.993769    4148 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-743000 && echo "stopped-upgrade-743000" | sudo tee /etc/hostname
	I0826 04:13:34.058280    4148 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-743000
	
	I0826 04:13:34.058336    4148 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:34.058453    4148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025d85a0] 0x1025dae00 <nil>  [] 0s} localhost 50229 <nil> <nil>}
	I0826 04:13:34.058462    4148 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-743000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-743000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-743000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 04:13:34.120739    4148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
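The script above is idempotent: it leaves /etc/hosts alone when the hostname is already mapped, rewrites an existing 127.0.1.1 entry if one exists, and appends one otherwise. A sketch of how such a snippet might be assembled host-side before being sent over SSH (hostsScript is a hypothetical helper; the hostname is the one from this run):

    package main

    import "fmt"

    // hostsScript returns a shell snippet that maps 127.0.1.1 to the given
    // hostname in /etc/hosts, replacing an existing 127.0.1.1 entry if present
    // and appending one otherwise. Illustrative; mirrors the script in the log.
    func hostsScript(hostname string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
        else
            echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
        fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(hostsScript("stopped-upgrade-743000"))
    }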
	I0826 04:13:34.120754    4148 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19501-1045/.minikube CaCertPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19501-1045/.minikube}
	I0826 04:13:34.120767    4148 buildroot.go:174] setting up certificates
	I0826 04:13:34.120771    4148 provision.go:84] configureAuth start
	I0826 04:13:34.120777    4148 provision.go:143] copyHostCerts
	I0826 04:13:34.120858    4148 exec_runner.go:144] found /Users/jenkins/minikube-integration/19501-1045/.minikube/key.pem, removing ...
	I0826 04:13:34.120864    4148 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19501-1045/.minikube/key.pem
	I0826 04:13:34.121044    4148 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19501-1045/.minikube/key.pem (1675 bytes)
	I0826 04:13:34.121686    4148 exec_runner.go:144] found /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.pem, removing ...
	I0826 04:13:34.121692    4148 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.pem
	I0826 04:13:34.121748    4148 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.pem (1082 bytes)
	I0826 04:13:34.121861    4148 exec_runner.go:144] found /Users/jenkins/minikube-integration/19501-1045/.minikube/cert.pem, removing ...
	I0826 04:13:34.121864    4148 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19501-1045/.minikube/cert.pem
	I0826 04:13:34.121910    4148 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19501-1045/.minikube/cert.pem (1123 bytes)
	I0826 04:13:34.121994    4148 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-743000 san=[127.0.0.1 localhost minikube stopped-upgrade-743000]
	I0826 04:13:34.279984    4148 provision.go:177] copyRemoteCerts
	I0826 04:13:34.280024    4148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 04:13:34.280034    4148 sshutil.go:53] new ssh client: &{IP:localhost Port:50229 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0826 04:13:34.310958    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0826 04:13:34.317552    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0826 04:13:34.324519    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 04:13:34.331671    4148 provision.go:87] duration metric: took 210.898541ms to configureAuth
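configureAuth regenerates the Docker server certificate against the minikube CA with the SANs listed above (127.0.0.1, localhost, minikube, and the machine name), then copies ca.pem, server.pem, and server-key.pem into /etc/docker. A self-contained sketch of issuing a SAN-bearing server certificate with crypto/x509 (the CA is generated in memory for brevity, where minikube would load ca.pem/ca-key.pem from disk; error handling is elided):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // In-memory CA; minikube loads its CA cert and key from disk instead.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs from the log line above.
        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "stopped-upgrade-743000"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-743000"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }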
	I0826 04:13:34.331680    4148 buildroot.go:189] setting minikube options for container-runtime
	I0826 04:13:34.331797    4148 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0826 04:13:34.331840    4148 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:34.331934    4148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025d85a0] 0x1025dae00 <nil>  [] 0s} localhost 50229 <nil> <nil>}
	I0826 04:13:34.331939    4148 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0826 04:13:34.390564    4148 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0826 04:13:34.390573    4148 buildroot.go:70] root file system type: tmpfs
	I0826 04:13:34.390625    4148 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0826 04:13:34.390674    4148 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:34.390795    4148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025d85a0] 0x1025dae00 <nil>  [] 0s} localhost 50229 <nil> <nil>}
	I0826 04:13:34.390834    4148 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0826 04:13:34.452158    4148 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0826 04:13:34.452205    4148 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:34.452310    4148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025d85a0] 0x1025dae00 <nil>  [] 0s} localhost 50229 <nil> <nil>}
	I0826 04:13:34.452318    4148 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0826 04:13:34.821315    4148 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0826 04:13:34.821328    4148 machine.go:96] duration metric: took 903.240125ms to provisionDockerMachine
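Unit deployment above follows a write-then-swap idiom: the rendered unit goes to docker.service.new, and only when it differs from the installed unit (or, as here, no unit exists yet, hence the diff error) is it moved into place, followed by daemon-reload, enable, and restart. A local sketch of the same compare-and-swap (paths and unit content are illustrative; minikube performs these steps remotely over SSH):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged writes unit to path+".new" and swaps it into place only
    // when the content differs from what is already installed, then reloads
    // systemd and restarts the service, avoiding needless restarts otherwise.
    func installIfChanged(path string, unit []byte, service string) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, unit) {
            return nil // already up to date; skip the restart
        }
        if err := os.WriteFile(path+".new", unit, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", service},
            {"restart", service},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Demo\n") // illustrative unit body
        fmt.Println(installIfChanged("/tmp/demo.service", unit, "demo"))
    }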
	I0826 04:13:34.821336    4148 start.go:293] postStartSetup for "stopped-upgrade-743000" (driver="qemu2")
	I0826 04:13:34.821343    4148 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 04:13:34.821413    4148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 04:13:34.821423    4148 sshutil.go:53] new ssh client: &{IP:localhost Port:50229 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0826 04:13:34.856160    4148 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 04:13:34.857396    4148 info.go:137] Remote host: Buildroot 2021.02.12
	I0826 04:13:34.857406    4148 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19501-1045/.minikube/addons for local assets ...
	I0826 04:13:34.857489    4148 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19501-1045/.minikube/files for local assets ...
	I0826 04:13:34.857621    4148 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19501-1045/.minikube/files/etc/ssl/certs/15392.pem -> 15392.pem in /etc/ssl/certs
	I0826 04:13:34.857751    4148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 04:13:34.860537    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/files/etc/ssl/certs/15392.pem --> /etc/ssl/certs/15392.pem (1708 bytes)
	I0826 04:13:34.867447    4148 start.go:296] duration metric: took 46.1065ms for postStartSetup
	I0826 04:13:34.867461    4148 fix.go:56] duration metric: took 20.687131208s for fixHost
	I0826 04:13:34.867493    4148 main.go:141] libmachine: Using SSH client type: native
	I0826 04:13:34.867598    4148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025d85a0] 0x1025dae00 <nil>  [] 0s} localhost 50229 <nil> <nil>}
	I0826 04:13:34.867603    4148 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 04:13:34.926226    4148 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670814.893490254
	
	I0826 04:13:34.926235    4148 fix.go:216] guest clock: 1724670814.893490254
	I0826 04:13:34.926240    4148 fix.go:229] Guest: 2024-08-26 04:13:34.893490254 -0700 PDT Remote: 2024-08-26 04:13:34.867465 -0700 PDT m=+20.810013292 (delta=26.025254ms)
	I0826 04:13:34.926252    4148 fix.go:200] guest clock delta is within tolerance: 26.025254ms
	I0826 04:13:34.926255    4148 start.go:83] releasing machines lock for "stopped-upgrade-743000", held for 20.745932375s
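The fixHost clock check runs `date +%s.%N` in the guest and compares the result against the host wall clock; the 26ms delta above is under the sync threshold, so no forced clock adjustment is needed. A sketch of parsing that output and computing the delta (the tolerance constant is illustrative):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds)
    // into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Right-pad to 9 digits so "5" means 500ms, not 5ns.
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1724670814.893490254") // value from the log
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // illustrative threshold
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }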
	I0826 04:13:34.926345    4148 ssh_runner.go:195] Run: cat /version.json
	I0826 04:13:34.926355    4148 sshutil.go:53] new ssh client: &{IP:localhost Port:50229 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0826 04:13:34.926345    4148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 04:13:34.926388    4148 sshutil.go:53] new ssh client: &{IP:localhost Port:50229 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	W0826 04:13:34.927053    4148 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50454->127.0.0.1:50229: read: connection reset by peer
	I0826 04:13:34.927072    4148 retry.go:31] will retry after 206.628763ms: ssh: handshake failed: read tcp 127.0.0.1:50454->127.0.0.1:50229: read: connection reset by peer
	W0826 04:13:34.956633    4148 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0826 04:13:34.956691    4148 ssh_runner.go:195] Run: systemctl --version
	I0826 04:13:34.958534    4148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 04:13:34.960223    4148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 04:13:34.960257    4148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0826 04:13:34.963606    4148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0826 04:13:34.968264    4148 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 04:13:34.968275    4148 start.go:495] detecting cgroup driver to use...
	I0826 04:13:34.968363    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 04:13:34.975173    4148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0826 04:13:34.978098    4148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0826 04:13:34.981218    4148 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0826 04:13:34.981247    4148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0826 04:13:34.984815    4148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0826 04:13:34.988486    4148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0826 04:13:34.991801    4148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0826 04:13:34.995075    4148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 04:13:34.998289    4148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0826 04:13:35.001767    4148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0826 04:13:35.005114    4148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0826 04:13:35.008558    4148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 04:13:35.011538    4148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 04:13:35.014284    4148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:35.085440    4148 ssh_runner.go:195] Run: sudo systemctl restart containerd
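The sed chain above edits /etc/containerd/config.toml in place: it pins the sandbox image, forces SystemdCgroup = false (the cgroupfs driver), migrates runtimes to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d before containerd is restarted. A sketch of one of those rewrites done with Go's regexp instead of sed, preserving indentation the same way the sed capture group does:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup rewrites the SystemdCgroup key in a containerd
    // config.toml, preserving indentation, mirroring the sed expression
    // `s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`.
    func setSystemdCgroup(config []byte, enabled bool) []byte {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAll(config, []byte(fmt.Sprintf("${1}SystemdCgroup = %v", enabled)))
    }

    func main() {
        in := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
        fmt.Print(string(setSystemdCgroup(in, false)))
    }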
	I0826 04:13:35.097046    4148 start.go:495] detecting cgroup driver to use...
	I0826 04:13:35.097124    4148 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0826 04:13:35.103036    4148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 04:13:35.108513    4148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 04:13:35.116159    4148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 04:13:35.120807    4148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0826 04:13:35.126012    4148 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0826 04:13:35.165348    4148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0826 04:13:35.209788    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 04:13:35.215387    4148 ssh_runner.go:195] Run: which cri-dockerd
	I0826 04:13:35.216638    4148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0826 04:13:35.219476    4148 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0826 04:13:35.224971    4148 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0826 04:13:35.306711    4148 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0826 04:13:35.385812    4148 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0826 04:13:35.385867    4148 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0826 04:13:35.391328    4148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:35.465635    4148 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0826 04:13:36.600814    4148 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.13517875s)
	I0826 04:13:36.600892    4148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0826 04:13:36.607169    4148 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0826 04:13:36.614938    4148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0826 04:13:36.620829    4148 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0826 04:13:36.706302    4148 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0826 04:13:36.791897    4148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:36.865844    4148 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0826 04:13:36.871821    4148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0826 04:13:36.876231    4148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:36.952427    4148 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0826 04:13:36.992330    4148 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0826 04:13:36.992407    4148 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0826 04:13:36.994866    4148 start.go:563] Will wait 60s for crictl version
	I0826 04:13:36.994925    4148 ssh_runner.go:195] Run: which crictl
	I0826 04:13:36.996254    4148 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 04:13:37.010995    4148 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0826 04:13:37.011061    4148 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0826 04:13:37.026996    4148 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0826 04:13:37.046873    4148 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0826 04:13:37.046971    4148 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0826 04:13:37.048613    4148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 04:13:37.052713    4148 kubeadm.go:883] updating cluster {Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0826 04:13:37.052756    4148 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0826 04:13:37.052798    4148 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0826 04:13:37.067469    4148 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0826 04:13:37.067479    4148 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0826 04:13:37.067538    4148 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0826 04:13:37.070481    4148 ssh_runner.go:195] Run: which lz4
	I0826 04:13:37.071734    4148 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 04:13:37.072946    4148 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 04:13:37.072957    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0826 04:13:37.990694    4148 docker.go:649] duration metric: took 919.004458ms to copy over tarball
	I0826 04:13:37.990755    4148 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 04:13:39.163615    4148 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.172860208s)
	I0826 04:13:39.163632    4148 ssh_runner.go:146] rm: /preloaded.tar.lz4
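Because the stat existence check for /preloaded.tar.lz4 failed, the ~360 MB preload tarball was copied over SSH and unpacked into /var with lz4-aware tar, repopulating Docker's overlay2 image store wholesale; the tarball is then removed and Docker restarted. A sketch of the check-then-transfer decision (needsCopy is a hypothetical helper mirroring the stat probe; minikube runs it through its SSH runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsCopy mirrors the existence check in the log: run
    // `stat -c "%s %y" <path>` and treat a non-zero exit as "file missing,
    // transfer it". Illustrative; minikube executes this on the guest.
    func needsCopy(path string) bool {
        return exec.Command("stat", "-c", "%s %y", path).Run() != nil
    }

    func main() {
        if needsCopy("/preloaded.tar.lz4") {
            fmt.Println("scp tarball, then: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
        } else {
            fmt.Println("preload already present, skipping transfer")
        }
    }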
	I0826 04:13:39.179369    4148 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0826 04:13:39.182455    4148 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0826 04:13:39.187766    4148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:39.265998    4148 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0826 04:13:40.979922    4148 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.713933625s)
	I0826 04:13:40.980005    4148 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0826 04:13:40.991565    4148 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0826 04:13:40.991577    4148 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0826 04:13:40.991582    4148 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 04:13:40.996205    4148 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:13:40.998457    4148 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:13:41.000042    4148 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:13:41.000890    4148 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0826 04:13:41.001990    4148 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0826 04:13:41.003731    4148 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:13:41.003831    4148 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:13:41.005938    4148 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0826 04:13:41.005969    4148 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:13:41.005983    4148 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0826 04:13:41.006449    4148 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:13:41.007280    4148 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:13:41.008923    4148 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:13:41.008955    4148 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:13:41.009855    4148 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:13:41.010792    4148 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:13:41.397758    4148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:13:41.408831    4148 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0826 04:13:41.408861    4148 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:13:41.408914    4148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0826 04:13:41.418467    4148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0826 04:13:41.419599    4148 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0826 04:13:41.428511    4148 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0826 04:13:41.428536    4148 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0826 04:13:41.428589    4148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0826 04:13:41.439040    4148 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0826 04:13:41.439175    4148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0826 04:13:41.441096    4148 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0826 04:13:41.441115    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0826 04:13:41.449450    4148 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0826 04:13:41.449465    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0826 04:13:41.458103    4148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0826 04:13:41.476273    4148 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0826 04:13:41.476518    4148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:13:41.486918    4148 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0826 04:13:41.486992    4148 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0826 04:13:41.487027    4148 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0826 04:13:41.487126    4148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0826 04:13:41.489476    4148 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0826 04:13:41.489496    4148 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:13:41.489537    4148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0826 04:13:41.492435    4148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:13:41.503605    4148 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0826 04:13:41.503830    4148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0826 04:13:41.508258    4148 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0826 04:13:41.508295    4148 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0826 04:13:41.508323    4148 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:13:41.508369    4148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0826 04:13:41.508384    4148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0826 04:13:41.508845    4148 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0826 04:13:41.508859    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0826 04:13:41.526824    4148 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0826 04:13:41.526900    4148 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0826 04:13:41.526923    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0826 04:13:41.566413    4148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:13:41.569550    4148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:13:41.615416    4148 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0826 04:13:41.615440    4148 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:13:41.615495    4148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0826 04:13:41.616569    4148 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0826 04:13:41.616583    4148 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:13:41.616615    4148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0826 04:13:41.627445    4148 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0826 04:13:41.627471    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0826 04:13:41.652635    4148 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0826 04:13:41.652762    4148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:13:41.674788    4148 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0826 04:13:41.678547    4148 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0826 04:13:41.730330    4148 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0826 04:13:41.730355    4148 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0826 04:13:41.730376    4148 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:13:41.730433    4148 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:13:41.766436    4148 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0826 04:13:41.766569    4148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0826 04:13:41.777932    4148 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0826 04:13:41.777967    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0826 04:13:41.857035    4148 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0826 04:13:41.857051    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0826 04:13:42.168510    4148 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0826 04:13:42.168535    4148 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0826 04:13:42.168541    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0826 04:13:42.310183    4148 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0826 04:13:42.310226    4148 cache_images.go:92] duration metric: took 1.318658792s to LoadCachedImages
	W0826 04:13:42.310263    4148 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
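The two X lines record the one hard failure in this phase: every other image was shipped to /var/lib/minikube/images and piped into docker load, but kube-proxy_v1.24.1 was never present in the local cache, so LoadCachedImages returns an error and that image must be pulled later. A sketch of the load step itself, streaming a tarball into docker load over stdin as the `sudo cat <tar> | docker load` pipeline does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // dockerLoad streams an image tarball into `docker load` via stdin,
    // equivalent to the log's cat-pipe without the extra cat process.
    func dockerLoad(tarPath string) error {
        f, err := os.Open(tarPath)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("docker load: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(dockerLoad("/var/lib/minikube/images/pause_3.7")) // path from the log
    }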
	I0826 04:13:42.310269    4148 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0826 04:13:42.310322    4148 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-743000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 04:13:42.310401    4148 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0826 04:13:42.329129    4148 cni.go:84] Creating CNI manager for ""
	I0826 04:13:42.329140    4148 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:13:42.329145    4148 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 04:13:42.329153    4148 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-743000 NodeName:stopped-upgrade-743000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 04:13:42.329224    4148 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-743000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
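The kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the options struct logged at kubeadm.go:181 and written to /var/tmp/minikube/kubeadm.yaml.new. A sketch of rendering the first document with text/template (the initCfg struct and its field names are illustrative, populated with values visible in this log):

    package main

    import (
        "os"
        "text/template"
    )

    // initCfg holds the handful of values visible in the log that feed the
    // InitConfiguration document. Field names are illustrative.
    type initCfg struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("init").Parse(initTmpl))
        t.Execute(os.Stdout, initCfg{
            AdvertiseAddress: "10.0.2.15",
            BindPort:         8443,
            NodeName:         "stopped-upgrade-743000",
            CRISocket:        "unix:///var/run/cri-dockerd.sock",
        })
    }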
	
	I0826 04:13:42.329301    4148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0826 04:13:42.332682    4148 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 04:13:42.332719    4148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 04:13:42.335556    4148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0826 04:13:42.340441    4148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 04:13:42.345399    4148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0826 04:13:42.350795    4148 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0826 04:13:42.352009    4148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 04:13:42.355813    4148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:13:42.441956    4148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 04:13:42.452142    4148 certs.go:68] Setting up /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000 for IP: 10.0.2.15
	I0826 04:13:42.452153    4148 certs.go:194] generating shared ca certs ...
	I0826 04:13:42.452162    4148 certs.go:226] acquiring lock for ca certs: {Name:mk94fc9641f4dd57ada21caac2320dd5698e14b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:13:42.452346    4148 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.key
	I0826 04:13:42.452431    4148 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/proxy-client-ca.key
	I0826 04:13:42.452437    4148 certs.go:256] generating profile certs ...
	I0826 04:13:42.452521    4148 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/client.key
	I0826 04:13:42.452546    4148 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c
	I0826 04:13:42.452563    4148 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0826 04:13:42.599111    4148 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c ...
	I0826 04:13:42.599124    4148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c: {Name:mka711d8503202f32ae1405c51f23e26645c99ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:13:42.599428    4148 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c ...
	I0826 04:13:42.599433    4148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c: {Name:mkb4269a5b538ca3c41c4adc929752d70b61160a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:13:42.599583    4148 certs.go:381] copying /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c -> /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.crt
	I0826 04:13:42.599728    4148 certs.go:385] copying /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c -> /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.key
	I0826 04:13:42.599895    4148 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/proxy-client.key
	I0826 04:13:42.600030    4148 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/1539.pem (1338 bytes)
	W0826 04:13:42.600061    4148 certs.go:480] ignoring /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/1539_empty.pem, impossibly tiny 0 bytes
	I0826 04:13:42.600067    4148 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 04:13:42.600085    4148 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem (1082 bytes)
	I0826 04:13:42.600103    4148 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem (1123 bytes)
	I0826 04:13:42.600121    4148 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/key.pem (1675 bytes)
	I0826 04:13:42.600159    4148 certs.go:484] found cert: /Users/jenkins/minikube-integration/19501-1045/.minikube/files/etc/ssl/certs/15392.pem (1708 bytes)
	I0826 04:13:42.600487    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 04:13:42.607924    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0826 04:13:42.614773    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 04:13:42.620990    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 04:13:42.628837    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 04:13:42.635932    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 04:13:42.642517    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 04:13:42.649277    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 04:13:42.656499    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/1539.pem --> /usr/share/ca-certificates/1539.pem (1338 bytes)
	I0826 04:13:42.663698    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/files/etc/ssl/certs/15392.pem --> /usr/share/ca-certificates/15392.pem (1708 bytes)
	I0826 04:13:42.670293    4148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19501-1045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 04:13:42.677271    4148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 04:13:42.682601    4148 ssh_runner.go:195] Run: openssl version
	I0826 04:13:42.684364    4148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15392.pem && ln -fs /usr/share/ca-certificates/15392.pem /etc/ssl/certs/15392.pem"
	I0826 04:13:42.687260    4148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15392.pem
	I0826 04:13:42.688551    4148 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:42 /usr/share/ca-certificates/15392.pem
	I0826 04:13:42.688580    4148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15392.pem
	I0826 04:13:42.690242    4148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15392.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 04:13:42.693408    4148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 04:13:42.696696    4148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 04:13:42.698057    4148 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:35 /usr/share/ca-certificates/minikubeCA.pem
	I0826 04:13:42.698078    4148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 04:13:42.699701    4148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 04:13:42.702677    4148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1539.pem && ln -fs /usr/share/ca-certificates/1539.pem /etc/ssl/certs/1539.pem"
	I0826 04:13:42.705564    4148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1539.pem
	I0826 04:13:42.707110    4148 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:42 /usr/share/ca-certificates/1539.pem
	I0826 04:13:42.707132    4148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1539.pem
	I0826 04:13:42.709013    4148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1539.pem /etc/ssl/certs/51391683.0"
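Each CA certificate under /usr/share/ca-certificates is also symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA above), the layout OpenSSL scans to find trust anchors during verification. A sketch of computing the hash via the openssl CLI and creating the link (paths illustrative; minikube does this remotely with test/ln as shown above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash asks openssl for the certificate's subject hash and
    // symlinks <hash>.0 -> certPath in destDir, the layout OpenSSL scans
    // when verifying chains.
    func linkBySubjectHash(certPath, destDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(destDir, strings.TrimSpace(string(out))+".0")
        os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }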
	I0826 04:13:42.713057    4148 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 04:13:42.714520    4148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 04:13:42.716379    4148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 04:13:42.718210    4148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 04:13:42.720004    4148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 04:13:42.721787    4148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 04:13:42.723640    4148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
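Each `-checkend 86400` run above asks whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same check can be done in pure Go with the standard library -- a sketch, with an illustrative cert path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded cert at path expires
// inside the given window -- the Go equivalent of
// "openssl x509 -noout -checkend <seconds>".
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate")
		os.Exit(1)
	}
}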
	I0826 04:13:42.725416    4148 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0826 04:13:42.725484    4148 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0826 04:13:42.736014    4148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 04:13:42.739128    4148 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 04:13:42.739135    4148 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 04:13:42.739158    4148 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 04:13:42.742617    4148 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 04:13:42.742851    4148 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-743000" does not appear in /Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:13:42.742901    4148 kubeconfig.go:62] /Users/jenkins/minikube-integration/19501-1045/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-743000" cluster setting kubeconfig missing "stopped-upgrade-743000" context setting]
	I0826 04:13:42.743076    4148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/kubeconfig: {Name:mk689667536e8273d65b27bdc18d08f46d2d09b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:13:42.743483    4148 kapi.go:59] client config for stopped-upgrade-743000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/client.key", CAFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b93d30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0826 04:13:42.743803    4148 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 04:13:42.746539    4148 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-743000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
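The drift check is a plain `sudo diff -u` between the deployed kubeadm.yaml and the newly rendered one: exit status 0 means no drift, 1 means the files differ (reconfigure, as here), and anything else is a genuine error. A sketch of mapping that exit status in Go:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs "sudo diff -u old new" the way the log does and
// maps diff's exit status onto a tri-state result: 0 = no drift,
// 1 = drift (reconfigure), anything else = real error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}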
	I0826 04:13:42.746546    4148 kubeadm.go:1160] stopping kube-system containers ...
	I0826 04:13:42.746584    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0826 04:13:42.763695    4148 docker.go:483] Stopping containers: [3e4a8d1b968e 685ee9b0ae9e cc35bfd333d8 71421ff8863d 1d61c7d6f094 db42efb0ce47 a4f6626e87a1 745706e6ede3 8c9ec3306d72]
	I0826 04:13:42.763762    4148 ssh_runner.go:195] Run: docker stop 3e4a8d1b968e 685ee9b0ae9e cc35bfd333d8 71421ff8863d 1d61c7d6f094 db42efb0ce47 a4f6626e87a1 745706e6ede3 8c9ec3306d72
	I0826 04:13:42.774369    4148 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 04:13:42.779983    4148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 04:13:42.782829    4148 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 04:13:42.782836    4148 kubeadm.go:157] found existing configuration files:
	
	I0826 04:13:42.782870    4148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/admin.conf
	I0826 04:13:42.785475    4148 kubeadm.go:163] "https://control-plane.minikube.internal:50261" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 04:13:42.785513    4148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 04:13:42.788729    4148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/kubelet.conf
	I0826 04:13:42.792118    4148 kubeadm.go:163] "https://control-plane.minikube.internal:50261" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 04:13:42.792162    4148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 04:13:42.795119    4148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/controller-manager.conf
	I0826 04:13:42.797506    4148 kubeadm.go:163] "https://control-plane.minikube.internal:50261" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 04:13:42.797529    4148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 04:13:42.800179    4148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/scheduler.conf
	I0826 04:13:42.802491    4148 kubeadm.go:163] "https://control-plane.minikube.internal:50261" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 04:13:42.802511    4148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
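The four grep-then-rm pairs above apply one rule: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint (here https://control-plane.minikube.internal:50261) is treated as stale and deleted so kubeadm can regenerate it. Compactly, as a Go sketch of the same loop:

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:50261")
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		// Missing file or wrong endpoint: remove so kubeadm can rewrite it
		// (the log's "rm -f" runs unconditionally after a failed grep).
		if err != nil || !bytes.Contains(data, endpoint) {
			os.Remove(path)
			fmt.Println("removed stale", path)
		}
	}
}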
	I0826 04:13:42.805170    4148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 04:13:42.808175    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 04:13:42.830571    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 04:13:43.504369    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 04:13:43.634015    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 04:13:43.657775    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
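Rather than a full `kubeadm init`, the restart path replays individual init phases in order -- certs, kubeconfig, kubelet-start, control-plane, etcd -- all against the freshly copied /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence from Go (the log wraps each call in `sudo env PATH=...`; this simplified version invokes the version-pinned binary directly):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPhases replays the kubeadm init phases from the log, in order.
// binDir is the version-pinned binary directory, e.g.
// /var/lib/minikube/binaries/v1.24.1 (taken from the log).
func runPhases(binDir, config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		cmd := exec.Command("sudo", append([]string{binDir + "/kubeadm"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm init phase %v: %w", p, err)
		}
	}
	return nil
}

func main() {
	if err := runPhases("/var/lib/minikube/binaries/v1.24.1", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}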
	I0826 04:13:43.684710    4148 api_server.go:52] waiting for apiserver process to appear ...
	I0826 04:13:43.684796    4148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:13:44.186819    4148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:13:44.686619    4148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:13:44.690974    4148 api_server.go:72] duration metric: took 1.006282208s to wait for apiserver process to appear ...
	I0826 04:13:44.690985    4148 api_server.go:88] waiting for apiserver healthz status ...
	I0826 04:13:44.690994    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:13:49.693176    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:13:49.693258    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:13:54.694079    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:13:54.694163    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:13:59.695037    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:13:59.695058    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:04.695832    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:04.695891    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:09.697327    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:09.697407    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:14.699132    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:14.699156    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:19.700922    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:19.700942    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:24.703045    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:24.703087    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:29.705010    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:29.705058    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:34.705778    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:34.705842    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:39.708158    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:39.708201    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:44.710413    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
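Each `Checking apiserver healthz` attempt above is an HTTPS GET against https://10.0.2.15:8443/healthz with a roughly five-second client timeout; every probe here deadlines, so after about a minute the loop gives up and switches to gathering diagnostics. A minimal sketch of such a probe loop (the retry interval and the InsecureSkipVerify shortcut are illustrative, not minikube's exact client config, which pins the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200
// or the overall deadline passes, mirroring the retry cadence in the log.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, as in the log
		Transport: &http.Transport{
			// Illustrative shortcut only; verify against the cluster CA in practice.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://10.0.2.15:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}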
	I0826 04:14:44.710645    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:14:44.734870    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:14:44.734973    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:14:44.750645    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:14:44.750729    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:14:44.762157    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:14:44.762229    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:14:44.772366    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:14:44.772436    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:14:44.782868    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:14:44.782932    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:14:44.793601    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:14:44.793679    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:14:44.808594    4148 logs.go:276] 0 containers: []
	W0826 04:14:44.808606    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:14:44.808667    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:14:44.819540    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:14:44.819557    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:14:44.819562    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:14:44.833471    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:14:44.833481    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:14:44.858206    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:14:44.858218    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:14:44.870339    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:14:44.870351    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:14:44.884510    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:14:44.884521    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:14:44.901685    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:14:44.901696    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:14:44.915432    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:14:44.915444    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:14:44.939877    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:14:44.939885    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:14:44.975583    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:14:44.975591    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:14:45.054573    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:14:45.054586    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:14:45.069237    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:14:45.069250    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:14:45.084847    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:14:45.084858    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:14:45.088885    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:14:45.088894    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:14:45.131653    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:14:45.131664    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:14:45.142892    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:14:45.142905    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:14:45.154541    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:14:45.154553    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
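One full diagnostic pass is visible above: list containers per control-plane component with a docker name filter, tail the last 400 log lines of each, then fall back from crictl to `docker ps -a` for overall container status (the backticked `which crictl || echo crictl` in the final command). This same pass repeats after every failed healthz round below. A sketch of one gathering step, assuming docker is reachable:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the log's per-component listing, e.g.
// docker ps -a --filter=name=k8s_etcd --format={{.ID}}.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		for _, id := range ids {
			// Same tail depth the log uses for every component.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}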
	I0826 04:14:47.668628    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:14:52.671034    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:14:52.671438    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:14:52.704453    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:14:52.704587    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:14:52.726641    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:14:52.726740    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:14:52.741014    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:14:52.741094    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:14:52.752819    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:14:52.752892    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:14:52.763983    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:14:52.764060    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:14:52.779425    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:14:52.779491    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:14:52.789387    4148 logs.go:276] 0 containers: []
	W0826 04:14:52.789398    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:14:52.789454    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:14:52.800689    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:14:52.800705    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:14:52.800711    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:14:52.838730    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:14:52.838741    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:14:52.849932    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:14:52.849943    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:14:52.867940    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:14:52.867952    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:14:52.879494    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:14:52.879508    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:14:52.893825    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:14:52.893835    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:14:52.907784    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:14:52.907795    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:14:52.919195    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:14:52.919206    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:14:52.923143    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:14:52.923150    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:14:52.944058    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:14:52.944069    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:14:52.955029    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:14:52.955041    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:14:52.968451    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:14:52.968461    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:14:52.992131    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:14:52.992140    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:14:53.028297    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:14:53.028306    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:14:53.044692    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:14:53.044701    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:14:53.082479    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:14:53.082492    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:14:55.598569    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:00.600932    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:00.601349    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:00.638171    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:15:00.638310    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:00.661962    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:15:00.662060    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:00.676107    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:15:00.676176    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:00.688359    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:15:00.688436    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:00.699256    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:15:00.699325    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:00.710996    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:15:00.711057    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:00.721138    4148 logs.go:276] 0 containers: []
	W0826 04:15:00.721148    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:00.721200    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:00.731606    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:15:00.731622    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:15:00.731628    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:15:00.743044    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:15:00.743056    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:15:00.756570    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:00.756581    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:00.761202    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:15:00.761211    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:15:00.775271    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:15:00.775282    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:15:00.797649    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:15:00.797662    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:15:00.811341    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:15:00.811353    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:15:00.822637    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:15:00.822649    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:15:00.836579    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:15:00.836592    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:15:00.861433    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:15:00.861442    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:15:00.873193    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:15:00.873204    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:00.885243    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:00.885254    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:00.924016    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:00.924032    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:00.959113    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:15:00.959128    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:15:00.997131    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:15:00.997144    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:15:01.011048    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:01.011058    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:03.537791    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:08.540404    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:08.540641    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:08.563314    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:15:08.563421    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:08.579234    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:15:08.579308    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:08.592310    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:15:08.592390    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:08.603378    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:15:08.603446    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:08.613308    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:15:08.613378    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:08.624004    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:15:08.624070    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:08.639026    4148 logs.go:276] 0 containers: []
	W0826 04:15:08.639037    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:08.639093    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:08.649144    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:15:08.649159    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:08.649165    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:08.653835    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:15:08.653842    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:15:08.670656    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:15:08.670668    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:15:08.685150    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:15:08.685163    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:15:08.706658    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:15:08.706669    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:08.718194    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:08.718205    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:08.752783    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:15:08.752798    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:15:08.766929    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:15:08.766940    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:15:08.778418    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:15:08.778430    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:15:08.825325    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:15:08.825343    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:15:08.839590    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:15:08.839603    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:15:08.852993    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:08.853005    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:08.878009    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:08.878016    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:08.915982    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:15:08.915992    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:15:08.926675    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:15:08.926685    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:15:08.944736    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:15:08.944746    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:15:11.458647    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:16.459742    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:16.459901    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:16.473465    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:15:16.473539    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:16.484738    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:15:16.484810    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:16.495381    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:15:16.495444    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:16.505780    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:15:16.505855    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:16.519748    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:15:16.519818    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:16.530327    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:15:16.530391    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:16.539874    4148 logs.go:276] 0 containers: []
	W0826 04:15:16.539885    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:16.539942    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:16.550071    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:15:16.550092    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:15:16.550097    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:15:16.563551    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:15:16.563565    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:15:16.577473    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:15:16.577484    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:15:16.591986    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:15:16.591998    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:15:16.613239    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:15:16.613251    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:15:16.626266    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:16.626279    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:16.665978    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:15:16.665991    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:15:16.677990    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:15:16.678001    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:15:16.716899    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:15:16.716913    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:15:16.728059    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:15:16.728069    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:15:16.741714    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:15:16.741725    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:16.753608    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:16.753622    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:16.790283    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:15:16.790293    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:15:16.808600    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:15:16.808612    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:15:16.820080    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:16.820091    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:16.845118    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:16.845130    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:19.351567    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:24.354012    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:24.354149    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:24.365162    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:15:24.365226    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:24.375889    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:15:24.375958    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:24.386902    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:15:24.386966    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:24.397243    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:15:24.397307    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:24.408620    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:15:24.408680    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:24.419337    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:15:24.419391    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:24.433977    4148 logs.go:276] 0 containers: []
	W0826 04:15:24.433990    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:24.434047    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:24.444626    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:15:24.444644    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:24.444652    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:24.449685    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:24.449697    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:24.485646    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:15:24.485661    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:24.500198    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:15:24.500210    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:15:24.540674    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:15:24.540689    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:15:24.555159    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:15:24.555175    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:15:24.570802    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:24.570813    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:24.595631    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:24.595659    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:24.634040    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:15:24.634050    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:15:24.649055    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:15:24.649069    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:15:24.666009    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:15:24.666023    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:15:24.680235    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:15:24.680248    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:15:24.698249    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:15:24.698261    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:15:24.711467    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:15:24.711477    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:15:24.722767    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:15:24.722781    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:15:24.744005    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:15:24.744018    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:15:27.258390    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:32.260582    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:32.260706    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:32.274876    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:15:32.274945    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:32.289870    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:15:32.289940    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:32.300882    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:15:32.300944    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:32.311695    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:15:32.311759    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:32.321939    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:15:32.322012    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:32.333497    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:15:32.333585    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:32.343926    4148 logs.go:276] 0 containers: []
	W0826 04:15:32.343936    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:32.343991    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:32.361929    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:15:32.361952    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:15:32.361959    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:15:32.375939    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:32.375952    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:32.413452    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:32.413464    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:32.417954    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:15:32.417963    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:15:32.431352    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:32.431366    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:32.454527    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:32.454534    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:32.491758    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:15:32.491774    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:15:32.551993    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:15:32.552007    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:15:32.566271    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:15:32.566283    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:15:32.582185    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:15:32.582196    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:15:32.606623    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:15:32.606636    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:15:32.618207    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:15:32.618217    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:15:32.631439    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:15:32.631450    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:32.643356    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:15:32.643368    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:15:32.658828    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:15:32.658842    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:15:32.677328    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:15:32.677337    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:15:35.190979    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:40.192091    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:40.192223    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:40.204421    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:15:40.204491    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:40.215465    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:15:40.215536    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:40.226049    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:15:40.226114    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:40.236508    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:15:40.236576    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:40.248069    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:15:40.248138    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:40.266357    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:15:40.266426    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:40.277104    4148 logs.go:276] 0 containers: []
	W0826 04:15:40.277119    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:40.277180    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:40.288151    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:15:40.288170    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:15:40.288176    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:15:40.301822    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:15:40.301833    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:15:40.319248    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:15:40.319260    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:15:40.333311    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:15:40.333322    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:15:40.353346    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:15:40.353357    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:15:40.368031    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:15:40.368041    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:15:40.395486    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:15:40.395497    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:15:40.407116    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:40.407127    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:40.444999    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:15:40.445008    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:15:40.459106    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:15:40.459116    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:15:40.496161    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:15:40.496174    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:15:40.509813    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:15:40.509823    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:40.521235    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:40.521246    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:40.525636    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:40.525643    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:40.569782    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:15:40.569793    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:15:40.586016    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:40.586030    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:43.110236    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:48.112507    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
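The pattern above repeats for the rest of this run: minikube probes the guest's apiserver at https://10.0.2.15:8443/healthz, the request times out after roughly five seconds, and the retry loop falls back to re-enumerating containers and re-gathering logs. A minimal Go sketch of that probe, assuming a plain net/http client with a 5-second timeout and TLS verification skipped for illustration (names here are illustrative, not minikube's actual implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues the same kind of request the log shows: GET /healthz
// against the apiserver, failing with "context deadline exceeded" when the
// server does not answer within the client timeout.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}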
	I0826 04:15:48.112591    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:48.123624    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:15:48.123694    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:48.133737    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:15:48.133810    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:48.147343    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:15:48.147412    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:48.157867    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:15:48.157935    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:48.168686    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:15:48.168751    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:48.179682    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:15:48.179758    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:48.189486    4148 logs.go:276] 0 containers: []
	W0826 04:15:48.189501    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:48.189561    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:48.199578    4148 logs.go:276] 1 containers: [0030970326bc]
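Each retry re-runs the same discovery pass shown above: one docker ps -a per control-plane component, filtered by the k8s_<name> container-name prefix and formatted down to bare container IDs. A sketch of that pass, assuming docker is on PATH on the node (the component list and output formatting are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Equivalent of: docker ps -a --filter=name=k8s_<c> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}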
	I0826 04:15:48.199597    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:48.199603    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:48.236152    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:48.236164    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:48.240575    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:48.240584    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:48.280271    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:15:48.280285    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:15:48.294448    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:15:48.294459    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:15:48.339739    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:15:48.339753    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:15:48.354322    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:15:48.354335    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:15:48.365720    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:15:48.365731    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:15:48.380155    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:15:48.380165    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:15:48.400804    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:15:48.400815    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:15:48.412106    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:15:48.412117    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:15:48.459142    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:15:48.459153    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:15:48.473125    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:15:48.473137    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:15:48.484602    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:15:48.484611    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:15:48.498591    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:48.498599    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:48.522614    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:15:48.522622    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:51.036300    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:15:56.037784    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:15:56.037884    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:15:56.049216    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:15:56.049281    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:15:56.060210    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:15:56.060279    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:15:56.071198    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:15:56.071270    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:15:56.082323    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:15:56.082391    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:15:56.092798    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:15:56.092873    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:15:56.103835    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:15:56.103906    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:15:56.114000    4148 logs.go:276] 0 containers: []
	W0826 04:15:56.114013    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:15:56.114068    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:15:56.128718    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:15:56.128734    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:15:56.128740    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:15:56.142478    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:15:56.142492    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:15:56.157048    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:15:56.157061    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:15:56.168805    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:15:56.168817    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:15:56.192654    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:15:56.192667    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:15:56.204193    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:15:56.204204    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:15:56.217077    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:15:56.217089    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:15:56.258114    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:15:56.258124    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:15:56.302019    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:15:56.302031    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:15:56.326170    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:15:56.326181    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:15:56.337940    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:15:56.337954    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:15:56.373344    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:15:56.373357    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:15:56.396844    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:15:56.396851    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:15:56.401069    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:15:56.401077    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:15:56.415429    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:15:56.415439    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:15:56.429311    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:15:56.429323    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
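For every container ID the discovery pass finds, the gather step pulls the last 400 lines of output (docker logs --tail 400 <id>), alongside the host-side sources visible above: journalctl for kubelet and docker/cri-docker, a filtered dmesg, and kubectl describe nodes. A sketch of the per-container part, run through bash exactly as the Run lines show (the gatherLogs name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs mirrors the Run lines above: tail the last 400 lines of each
// container's output via /bin/bash -c "docker logs --tail 400 <id>".
func gatherLogs(ids []string) {
	for _, id := range ids {
		out, err := exec.Command("/bin/bash", "-c",
			"docker logs --tail 400 "+id).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", id, out)
	}
}

func main() {
	gatherLogs([]string{"c0d71cf0e313", "0030970326bc"})
}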
	I0826 04:15:58.942675    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:03.943293    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:03.943376    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:03.954799    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:16:03.954878    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:03.969905    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:16:03.969973    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:03.981978    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:16:03.982055    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:03.993443    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:16:03.993513    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:04.003982    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:16:04.004047    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:04.014188    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:16:04.014258    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:04.024762    4148 logs.go:276] 0 containers: []
	W0826 04:16:04.024775    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:04.024837    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:04.035644    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:16:04.035660    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:04.035666    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:04.040212    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:04.040223    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:04.076317    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:16:04.076329    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:16:04.094635    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:16:04.094647    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:16:04.105925    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:16:04.105936    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:16:04.118499    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:04.118510    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:04.156647    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:16:04.156660    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:16:04.170458    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:16:04.170468    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:16:04.207581    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:16:04.207594    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:16:04.221419    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:16:04.221431    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:16:04.235545    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:16:04.235557    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:16:04.256751    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:16:04.256762    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:16:04.269869    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:16:04.269885    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:16:04.281727    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:16:04.281739    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:16:04.299273    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:04.299288    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:04.324401    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:16:04.324419    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
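The "container status" step that closes each gather pass is a fallback chain: use crictl if it resolves on PATH, otherwise fall back to plain docker ps -a. Because the chain relies on backtick command substitution and ||, it has to go through a shell rather than a direct exec; a sketch, assuming passwordless sudo on the node:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command as the log: prefer crictl when installed, else docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Printf("%s", out)
}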
	I0826 04:16:06.838075    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:11.840251    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:11.840336    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:11.852935    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:16:11.853011    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:11.867528    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:16:11.867613    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:11.878149    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:16:11.878227    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:11.889186    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:16:11.889252    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:11.900697    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:16:11.900769    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:11.912289    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:16:11.912364    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:11.923478    4148 logs.go:276] 0 containers: []
	W0826 04:16:11.923491    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:11.923556    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:11.934696    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:16:11.934712    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:11.934717    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:11.972987    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:16:11.973003    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:16:12.013705    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:16:12.013720    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:16:12.026294    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:16:12.026308    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:16:12.040464    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:16:12.040474    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:16:12.053592    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:12.053605    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:12.078318    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:12.078329    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:12.082542    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:12.082550    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:12.116270    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:16:12.116282    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:16:12.130501    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:16:12.130514    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:16:12.145320    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:16:12.145331    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:16:12.166925    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:16:12.166936    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:12.178730    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:16:12.178745    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:16:12.192642    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:16:12.192653    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:16:12.204393    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:16:12.204407    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:16:12.222286    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:16:12.222297    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:16:14.736265    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:19.738466    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:19.738549    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:19.750460    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:16:19.750547    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:19.762344    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:16:19.762423    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:19.774169    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:16:19.774241    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:19.786024    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:16:19.786105    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:19.797766    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:16:19.797835    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:19.809583    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:16:19.809653    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:19.826780    4148 logs.go:276] 0 containers: []
	W0826 04:16:19.826792    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:19.826856    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:19.855563    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:16:19.855583    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:16:19.855588    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:16:19.871281    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:16:19.871292    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:16:19.886141    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:16:19.886154    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:16:19.898606    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:16:19.898619    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:16:19.916054    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:19.916066    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:19.954943    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:16:19.954958    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:16:19.995052    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:16:19.995068    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:16:20.009536    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:16:20.009547    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:16:20.020861    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:16:20.020872    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:16:20.041878    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:16:20.041888    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:16:20.055563    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:16:20.055578    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:16:20.067249    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:20.067261    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:20.071420    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:20.071430    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:20.105146    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:16:20.105160    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:16:20.118910    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:20.118925    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:20.141434    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:16:20.141442    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:22.657701    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:27.660023    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:27.660161    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:27.672618    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:16:27.672690    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:27.683934    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:16:27.684010    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:27.695313    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:16:27.695394    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:27.712762    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:16:27.712840    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:27.723939    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:16:27.724014    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:27.740598    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:16:27.740673    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:27.753331    4148 logs.go:276] 0 containers: []
	W0826 04:16:27.753346    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:27.753410    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:27.765151    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:16:27.765170    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:27.765176    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:27.804971    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:16:27.804990    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:16:27.847615    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:16:27.847631    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:16:27.859854    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:27.859867    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:27.883924    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:16:27.883936    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:27.898001    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:16:27.898014    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:16:27.913627    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:16:27.913642    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:16:27.926865    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:16:27.926877    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:16:27.941971    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:27.941983    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:27.946843    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:16:27.946852    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:16:27.967765    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:16:27.967774    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:16:27.982552    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:16:27.982564    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:16:28.003352    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:16:28.003363    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:16:28.021929    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:16:28.021942    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:16:28.034231    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:28.034244    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:28.071009    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:16:28.071021    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:16:30.598936    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:35.601323    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:35.601440    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:35.628038    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:16:35.628114    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:35.641187    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:16:35.641258    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:35.652906    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:16:35.652964    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:35.665494    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:16:35.665567    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:35.679418    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:16:35.679491    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:35.690977    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:16:35.691047    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:35.701948    4148 logs.go:276] 0 containers: []
	W0826 04:16:35.701959    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:35.702020    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:35.714515    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:16:35.714534    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:35.714540    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:35.752011    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:16:35.752027    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:16:35.768164    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:16:35.768174    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:16:35.782545    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:16:35.782559    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:16:35.797990    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:16:35.798004    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:16:35.824997    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:16:35.825013    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:16:35.840741    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:35.840756    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:35.846114    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:16:35.846122    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:16:35.886141    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:16:35.886154    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:16:35.898888    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:16:35.898901    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:16:35.911938    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:16:35.911952    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:16:35.925820    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:16:35.925832    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:35.941380    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:35.941389    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:35.982086    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:16:35.982096    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:16:36.000651    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:16:36.000659    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:16:36.015234    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:36.015247    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:38.541914    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:43.544631    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:43.545069    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:43.599774    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:16:43.599873    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:43.616757    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:16:43.616807    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:43.635238    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:16:43.635294    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:43.646903    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:16:43.646982    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:43.657831    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:16:43.657885    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:43.670740    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:16:43.670807    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:43.683713    4148 logs.go:276] 0 containers: []
	W0826 04:16:43.683725    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:43.683786    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:43.696540    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:16:43.696564    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:43.696570    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:43.737442    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:16:43.737454    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:16:43.753467    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:16:43.753479    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:16:43.769274    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:16:43.769291    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:16:43.782160    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:43.782174    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:43.819225    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:16:43.819240    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:16:43.834054    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:16:43.834067    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:16:43.846127    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:16:43.846141    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:16:43.868812    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:16:43.868822    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:16:43.881742    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:43.881767    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:43.906608    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:43.906622    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:43.910987    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:16:43.911000    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:16:43.925395    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:16:43.925407    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:16:43.968619    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:16:43.968637    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:16:43.989671    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:16:43.989689    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:16:44.004114    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:16:44.004123    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:46.526593    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:51.528530    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:51.528931    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:51.569243    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:16:51.569418    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:51.598449    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:16:51.598524    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:51.613059    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:16:51.613100    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:51.625555    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:16:51.625587    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:51.637313    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:16:51.637351    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:51.648698    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:16:51.648737    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:51.665075    4148 logs.go:276] 0 containers: []
	W0826 04:16:51.665087    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:51.665146    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:51.681767    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:16:51.681787    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:16:51.681793    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:16:51.694358    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:16:51.694372    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:16:51.725874    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:16:51.725891    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:16:51.744074    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:16:51.744087    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:16:51.757130    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:51.757142    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:16:51.798211    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:16:51.798226    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:16:51.816751    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:16:51.816764    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:16:51.832330    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:16:51.832342    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:16:51.844375    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:16:51.844384    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:16:51.857865    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:51.857877    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:51.893561    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:16:51.893572    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:16:51.934642    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:16:51.934660    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:16:51.950095    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:16:51.950109    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:51.965398    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:51.965413    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:51.971024    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:51.971036    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:51.996200    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:16:51.996211    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:16:54.516284    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:16:59.518629    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:16:59.519066    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:16:59.558084    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:16:59.558222    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:16:59.579702    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:16:59.579805    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:16:59.595999    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:16:59.596078    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:16:59.612439    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:16:59.612507    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:16:59.627403    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:16:59.627478    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:16:59.639470    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:16:59.639503    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:16:59.651247    4148 logs.go:276] 0 containers: []
	W0826 04:16:59.651254    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:16:59.651290    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:16:59.663983    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:16:59.663996    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:16:59.664001    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:16:59.683619    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:16:59.683634    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:16:59.728199    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:16:59.728214    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:16:59.742768    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:16:59.742777    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:16:59.754765    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:16:59.754774    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:16:59.769608    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:16:59.769622    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:16:59.782944    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:16:59.782958    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:16:59.799628    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:16:59.799638    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:16:59.823530    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:16:59.823546    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:16:59.836936    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:16:59.836950    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:16:59.841762    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:16:59.841770    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:16:59.863657    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:16:59.863675    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:16:59.884094    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:16:59.884112    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:16:59.928951    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:16:59.928968    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:16:59.942606    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:16:59.942622    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:16:59.960388    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:16:59.960401    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:02.502050    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:07.504251    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:07.504439    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:07.522131    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:17:07.522229    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:07.538726    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:17:07.538805    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:07.549911    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:17:07.549971    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:07.561660    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:17:07.561730    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:07.573078    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:17:07.573150    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:07.584715    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:17:07.584782    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:07.596021    4148 logs.go:276] 0 containers: []
	W0826 04:17:07.596039    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:07.596098    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:07.607269    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:17:07.607284    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:07.607289    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:07.648525    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:17:07.648545    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:17:07.668281    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:17:07.668296    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:17:07.686275    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:17:07.686286    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:17:07.709831    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:17:07.709843    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:17:07.722583    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:07.722595    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:07.726831    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:07.726844    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:07.750544    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:17:07.750558    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:07.766185    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:07.766197    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:07.804370    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:17:07.804384    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:17:07.844888    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:17:07.844907    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:17:07.857439    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:17:07.857455    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:17:07.872604    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:17:07.872616    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:17:07.891084    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:17:07.891094    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:17:07.905514    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:17:07.905526    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:17:07.920844    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:17:07.920855    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
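
The second half of the pass tails the last 400 lines from each discovered container, alongside the kubelet and docker journals, dmesg, and a "describe nodes" dump. The per-container step reduces to the following (a sketch; the IDs are the ones reported by the discovery step above):

    for id in dbe421235bae 685ee9b0ae9e 3c8dd03ee7d7 db42efb0ce47 c4724eb6b6b4 \
              06f55c9d89bb 71421ff8863d c0d71cf0e313 0030970326bc; do
        docker logs --tail 400 "$id"
    done
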
	I0826 04:17:10.434879    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:15.437194    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:15.437629    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:15.479271    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:17:15.479404    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:15.501594    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:17:15.501655    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:15.517262    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:17:15.517339    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:15.530210    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:17:15.530287    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:15.542183    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:17:15.542259    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:15.554288    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:17:15.554325    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:15.569112    4148 logs.go:276] 0 containers: []
	W0826 04:17:15.569124    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:15.569182    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:15.581065    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:17:15.581163    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:17:15.581207    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:17:15.596921    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:17:15.596931    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:17:15.609191    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:17:15.609202    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:17:15.624921    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:17:15.624937    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:17:15.646794    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:17:15.646805    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:17:15.663689    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:17:15.663699    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:17:15.679923    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:17:15.679935    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:17:15.693672    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:15.693683    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:15.732378    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:15.732396    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:15.736982    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:15.736992    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:15.776084    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:17:15.776097    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:17:15.817262    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:17:15.817273    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:17:15.835621    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:17:15.835632    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:15.849713    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:17:15.849738    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:17:15.865584    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:17:15.865597    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:17:15.888998    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:15.889015    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:18.414501    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:23.415262    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:23.415465    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:23.427567    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:17:23.427636    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:23.439647    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:17:23.439715    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:23.450178    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:17:23.450240    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:23.463106    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:17:23.463143    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:23.474219    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:17:23.474293    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:23.485638    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:17:23.485704    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:23.503044    4148 logs.go:276] 0 containers: []
	W0826 04:17:23.503056    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:23.503094    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:23.514818    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:17:23.514835    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:17:23.514841    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:17:23.536768    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:17:23.536776    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:17:23.551043    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:17:23.551054    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:23.563542    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:17:23.563552    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:17:23.578471    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:17:23.578479    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:17:23.618877    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:23.618890    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:23.644819    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:17:23.644833    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:17:23.661767    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:17:23.661778    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:17:23.678612    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:17:23.678624    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:17:23.693590    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:23.693600    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:23.698055    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:17:23.698062    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:17:23.710185    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:17:23.710197    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:17:23.726098    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:17:23.726115    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:17:23.748493    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:17:23.748504    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:17:23.763290    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:23.763301    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:23.801926    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:23.801936    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:26.340086    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:31.342348    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:31.342723    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:31.383826    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:17:31.383916    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:31.404473    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:17:31.404553    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:31.423584    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:17:31.423655    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:31.435807    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:17:31.435882    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:31.458984    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:17:31.459049    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:31.470507    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:17:31.470583    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:31.481206    4148 logs.go:276] 0 containers: []
	W0826 04:17:31.481216    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:31.481253    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:31.494281    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:17:31.494298    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:31.494303    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:31.535689    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:17:31.535705    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:17:31.550718    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:17:31.550734    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:17:31.567224    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:31.567235    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:31.607373    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:17:31.607382    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:17:31.622419    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:17:31.622430    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:17:31.634533    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:17:31.634545    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:17:31.649488    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:17:31.649505    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:17:31.672425    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:17:31.672436    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:31.686719    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:31.686729    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:31.691286    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:17:31.691298    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:17:31.710885    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:17:31.710896    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:17:31.725040    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:17:31.725054    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:17:31.737305    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:17:31.737319    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:17:31.780928    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:17:31.780940    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:17:31.793512    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:31.793523    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:34.319173    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:39.321527    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:39.321669    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:17:39.332280    4148 logs.go:276] 2 containers: [dbe421235bae 685ee9b0ae9e]
	I0826 04:17:39.332357    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:17:39.343347    4148 logs.go:276] 2 containers: [3c8dd03ee7d7 db42efb0ce47]
	I0826 04:17:39.343420    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:17:39.354210    4148 logs.go:276] 1 containers: [c4724eb6b6b4]
	I0826 04:17:39.354275    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:17:39.365245    4148 logs.go:276] 2 containers: [06f55c9d89bb 71421ff8863d]
	I0826 04:17:39.365315    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:17:39.376136    4148 logs.go:276] 1 containers: [c0d71cf0e313]
	I0826 04:17:39.376176    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:17:39.387270    4148 logs.go:276] 2 containers: [7476edc3c059 3e4a8d1b968e]
	I0826 04:17:39.387343    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:17:39.399144    4148 logs.go:276] 0 containers: []
	W0826 04:17:39.399156    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:17:39.399217    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:17:39.416125    4148 logs.go:276] 1 containers: [0030970326bc]
	I0826 04:17:39.416143    4148 logs.go:123] Gathering logs for kube-apiserver [685ee9b0ae9e] ...
	I0826 04:17:39.416149    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685ee9b0ae9e"
	I0826 04:17:39.456713    4148 logs.go:123] Gathering logs for kube-proxy [c0d71cf0e313] ...
	I0826 04:17:39.456726    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0d71cf0e313"
	I0826 04:17:39.473818    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:17:39.473830    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:17:39.478254    4148 logs.go:123] Gathering logs for kube-apiserver [dbe421235bae] ...
	I0826 04:17:39.478266    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbe421235bae"
	I0826 04:17:39.493609    4148 logs.go:123] Gathering logs for etcd [db42efb0ce47] ...
	I0826 04:17:39.493618    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db42efb0ce47"
	I0826 04:17:39.508652    4148 logs.go:123] Gathering logs for kube-scheduler [71421ff8863d] ...
	I0826 04:17:39.508665    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71421ff8863d"
	I0826 04:17:39.531490    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:17:39.531502    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:17:39.556073    4148 logs.go:123] Gathering logs for storage-provisioner [0030970326bc] ...
	I0826 04:17:39.556089    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0030970326bc"
	I0826 04:17:39.568953    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:17:39.568967    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:17:39.581564    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:17:39.581580    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:17:39.621692    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:17:39.621706    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:17:39.661402    4148 logs.go:123] Gathering logs for etcd [3c8dd03ee7d7] ...
	I0826 04:17:39.661418    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8dd03ee7d7"
	I0826 04:17:39.683346    4148 logs.go:123] Gathering logs for coredns [c4724eb6b6b4] ...
	I0826 04:17:39.683360    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4724eb6b6b4"
	I0826 04:17:39.696423    4148 logs.go:123] Gathering logs for kube-controller-manager [7476edc3c059] ...
	I0826 04:17:39.696440    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7476edc3c059"
	I0826 04:17:39.720193    4148 logs.go:123] Gathering logs for kube-scheduler [06f55c9d89bb] ...
	I0826 04:17:39.720207    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f55c9d89bb"
	I0826 04:17:39.735615    4148 logs.go:123] Gathering logs for kube-controller-manager [3e4a8d1b968e] ...
	I0826 04:17:39.735629    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e4a8d1b968e"
	I0826 04:17:42.251350    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:47.252727    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:17:47.252899    4148 kubeadm.go:597] duration metric: took 4m4.517731292s to restartPrimaryControlPlane
	W0826 04:17:47.253031    4148 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 04:17:47.253091    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0826 04:17:48.320944    4148 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.067854084s)
	I0826 04:17:48.321012    4148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 04:17:48.326698    4148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 04:17:48.329426    4148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 04:17:48.332274    4148 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 04:17:48.332281    4148 kubeadm.go:157] found existing configuration files:
	
	I0826 04:17:48.332307    4148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/admin.conf
	I0826 04:17:48.335191    4148 kubeadm.go:163] "https://control-plane.minikube.internal:50261" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 04:17:48.335217    4148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 04:17:48.337561    4148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/kubelet.conf
	I0826 04:17:48.340370    4148 kubeadm.go:163] "https://control-plane.minikube.internal:50261" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 04:17:48.340406    4148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 04:17:48.343488    4148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/controller-manager.conf
	I0826 04:17:48.345873    4148 kubeadm.go:163] "https://control-plane.minikube.internal:50261" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 04:17:48.345897    4148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 04:17:48.348771    4148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/scheduler.conf
	I0826 04:17:48.351654    4148 kubeadm.go:163] "https://control-plane.minikube.internal:50261" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50261 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 04:17:48.351675    4148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
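
The four grep/rm pairs above apply one cleanup rule: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted, so the kubeadm init below can regenerate it. Roughly equivalent to (sketch):

    endpoint="https://control-plane.minikube.internal:50261"
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
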
	I0826 04:17:48.354309    4148 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 04:17:48.372306    4148 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0826 04:17:48.372335    4148 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 04:17:48.426617    4148 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 04:17:48.426676    4148 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 04:17:48.426722    4148 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 04:17:48.477963    4148 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 04:17:48.482090    4148 out.go:235]   - Generating certificates and keys ...
	I0826 04:17:48.482187    4148 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 04:17:48.482326    4148 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 04:17:48.482449    4148 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 04:17:48.482525    4148 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 04:17:48.482569    4148 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 04:17:48.482629    4148 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 04:17:48.482670    4148 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 04:17:48.482735    4148 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 04:17:48.482798    4148 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 04:17:48.482913    4148 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 04:17:48.482988    4148 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 04:17:48.483016    4148 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 04:17:48.528147    4148 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 04:17:48.685212    4148 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 04:17:48.761670    4148 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 04:17:48.843852    4148 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 04:17:48.877174    4148 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 04:17:48.877739    4148 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 04:17:48.877760    4148 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 04:17:48.964605    4148 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 04:17:48.972753    4148 out.go:235]   - Booting up control plane ...
	I0826 04:17:48.972806    4148 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 04:17:48.972845    4148 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 04:17:48.972931    4148 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 04:17:48.973306    4148 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 04:17:48.974092    4148 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 04:17:53.476727    4148 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502445 seconds
	I0826 04:17:53.476794    4148 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 04:17:53.481114    4148 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 04:17:54.003214    4148 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 04:17:54.003623    4148 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-743000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 04:17:54.509379    4148 kubeadm.go:310] [bootstrap-token] Using token: cd0dhc.72e6ot1tlu90jlrf
	I0826 04:17:54.513690    4148 out.go:235]   - Configuring RBAC rules ...
	I0826 04:17:54.513771    4148 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 04:17:54.516296    4148 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 04:17:54.522407    4148 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 04:17:54.523559    4148 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 04:17:54.524724    4148 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 04:17:54.525990    4148 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 04:17:54.530155    4148 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 04:17:54.705875    4148 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 04:17:54.917951    4148 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 04:17:54.918475    4148 kubeadm.go:310] 
	I0826 04:17:54.918505    4148 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 04:17:54.918508    4148 kubeadm.go:310] 
	I0826 04:17:54.918567    4148 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 04:17:54.918575    4148 kubeadm.go:310] 
	I0826 04:17:54.918603    4148 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 04:17:54.918637    4148 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 04:17:54.918668    4148 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 04:17:54.918674    4148 kubeadm.go:310] 
	I0826 04:17:54.918701    4148 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 04:17:54.918704    4148 kubeadm.go:310] 
	I0826 04:17:54.918730    4148 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 04:17:54.918737    4148 kubeadm.go:310] 
	I0826 04:17:54.918770    4148 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 04:17:54.918810    4148 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 04:17:54.918850    4148 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 04:17:54.918855    4148 kubeadm.go:310] 
	I0826 04:17:54.918907    4148 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 04:17:54.918949    4148 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 04:17:54.918953    4148 kubeadm.go:310] 
	I0826 04:17:54.918993    4148 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cd0dhc.72e6ot1tlu90jlrf \
	I0826 04:17:54.919049    4148 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d48d9f38c6f791d9f71a5057d26eee89e43d0c7594d65171e1ecdad9babf1cb8 \
	I0826 04:17:54.919059    4148 kubeadm.go:310] 	--control-plane 
	I0826 04:17:54.919063    4148 kubeadm.go:310] 
	I0826 04:17:54.919110    4148 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 04:17:54.919116    4148 kubeadm.go:310] 
	I0826 04:17:54.919155    4148 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cd0dhc.72e6ot1tlu90jlrf \
	I0826 04:17:54.919206    4148 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d48d9f38c6f791d9f71a5057d26eee89e43d0c7594d65171e1ecdad9babf1cb8 
	I0826 04:17:54.919254    4148 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
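
The --discovery-token-ca-cert-hash in the join commands above can be re-derived on the control plane at any time; per the upstream kubeadm documentation it is the SHA-256 of the cluster CA's public key:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'
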
	I0826 04:17:54.919309    4148 cni.go:84] Creating CNI manager for ""
	I0826 04:17:54.919318    4148 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:17:54.923237    4148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 04:17:54.927208    4148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 04:17:54.931693    4148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
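
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not echoed into the log. A minimal bridge conflist of the kind this step installs looks like the following (a sketch; the plugin list and subnet are illustrative, not read from this run):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
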
	I0826 04:17:54.936585    4148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 04:17:54.936632    4148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 04:17:54.936670    4148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-743000 minikube.k8s.io/updated_at=2024_08_26T04_17_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=stopped-upgrade-743000 minikube.k8s.io/primary=true
	I0826 04:17:54.976991    4148 kubeadm.go:1113] duration metric: took 40.401875ms to wait for elevateKubeSystemPrivileges
	I0826 04:17:54.977000    4148 ops.go:34] apiserver oom_adj: -16
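
The oom_adj of -16 read back from /proc/<pid>/oom_adj by the check above biases the kernel's OOM killer strongly against selecting the apiserver process under memory pressure.
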
	I0826 04:17:54.977011    4148 kubeadm.go:394] duration metric: took 4m12.255701958s to StartCluster
	I0826 04:17:54.977022    4148 settings.go:142] acquiring lock: {Name:mk86204df15f9319a81c6b97808047ffc9e01022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:17:54.977106    4148 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:17:54.977475    4148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/kubeconfig: {Name:mk689667536e8273d65b27bdc18d08f46d2d09b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:17:54.977676    4148 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:17:54.977729    4148 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 04:17:54.977770    4148 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-743000"
	I0826 04:17:54.977771    4148 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0826 04:17:54.977782    4148 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-743000"
	W0826 04:17:54.977787    4148 addons.go:243] addon storage-provisioner should already be in state true
	I0826 04:17:54.977796    4148 host.go:66] Checking if "stopped-upgrade-743000" exists ...
	I0826 04:17:54.977806    4148 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-743000"
	I0826 04:17:54.977823    4148 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-743000"
	I0826 04:17:54.978796    4148 kapi.go:59] client config for stopped-upgrade-743000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/stopped-upgrade-743000/client.key", CAFile:"/Users/jenkins/minikube-integration/19501-1045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b93d30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0826 04:17:54.978926    4148 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-743000"
	W0826 04:17:54.978931    4148 addons.go:243] addon default-storageclass should already be in state true
	I0826 04:17:54.978938    4148 host.go:66] Checking if "stopped-upgrade-743000" exists ...
	I0826 04:17:54.982154    4148 out.go:177] * Verifying Kubernetes components...
	I0826 04:17:54.982456    4148 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 04:17:54.986622    4148 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 04:17:54.986630    4148 sshutil.go:53] new ssh client: &{IP:localhost Port:50229 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0826 04:17:54.990190    4148 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 04:17:54.994162    4148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 04:17:54.998211    4148 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 04:17:54.998217    4148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 04:17:54.998222    4148 sshutil.go:53] new ssh client: &{IP:localhost Port:50229 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0826 04:17:55.078636    4148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 04:17:55.084235    4148 api_server.go:52] waiting for apiserver process to appear ...
	I0826 04:17:55.084278    4148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 04:17:55.088492    4148 api_server.go:72] duration metric: took 110.805ms to wait for apiserver process to appear ...
	I0826 04:17:55.088500    4148 api_server.go:88] waiting for apiserver healthz status ...
	I0826 04:17:55.088507    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:17:55.113619    4148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 04:17:55.143407    4148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 04:17:55.506404    4148 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0826 04:17:55.506417    4148 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0826 04:18:00.089913    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:00.089968    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:05.090408    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:05.090434    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:10.090787    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:10.090813    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:15.091352    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:15.091375    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:20.091823    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:20.091873    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:25.092554    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:25.092618    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0826 04:18:25.508321    4148 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0826 04:18:25.512682    4148 out.go:177] * Enabled addons: storage-provisioner
	I0826 04:18:25.521556    4148 addons.go:510] duration metric: took 30.544339583s for enable addons: enabled=[storage-provisioner]
	I0826 04:18:30.093532    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:30.093584    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:35.094641    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:35.094691    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:40.096091    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:40.096162    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:45.098032    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:45.098054    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:50.099751    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:18:50.099801    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:18:55.100544    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
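
From 04:18:00 onward the probes retry back-to-back: each attempt times out after 5s and the next starts immediately, so the 6m0s node wait begun at 04:17:55 is being consumed entirely by failed healthz calls before the periodic log-gathering passes below resume.
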
	I0826 04:18:55.100686    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:18:55.111548    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:18:55.111621    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:18:55.123352    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:18:55.123428    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:18:55.134045    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:18:55.134109    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:18:55.144664    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:18:55.144733    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:18:55.154941    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:18:55.155015    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:18:55.165466    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:18:55.165540    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:18:55.175871    4148 logs.go:276] 0 containers: []
	W0826 04:18:55.175882    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:18:55.175941    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:18:55.186855    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:18:55.186870    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:18:55.186876    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:18:55.221195    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:18:55.221205    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:18:55.228584    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:18:55.228594    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:18:55.247634    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:18:55.247645    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:18:55.259599    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:18:55.259610    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:18:55.275154    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:18:55.275164    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:18:55.286631    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:18:55.286641    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:18:55.300524    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:18:55.300538    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:18:55.339710    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:18:55.339722    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:18:55.354262    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:18:55.354275    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:18:55.365798    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:18:55.365809    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:18:55.385182    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:18:55.385195    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:18:55.410682    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:18:55.410693    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:18:57.922863    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:02.925120    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:02.925226    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:02.936809    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:19:02.936884    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:02.947409    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:19:02.947477    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:02.959339    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:19:02.959422    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:02.969791    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:19:02.969857    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:02.980471    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:19:02.980531    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:02.990988    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:19:02.991053    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:03.000748    4148 logs.go:276] 0 containers: []
	W0826 04:19:03.000761    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:03.000817    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:03.010841    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:19:03.010857    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:19:03.010864    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:19:03.028369    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:19:03.028380    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:19:03.045018    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:19:03.045029    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:19:03.056520    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:03.056532    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:03.082283    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:19:03.082292    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:03.094773    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:03.094787    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:03.098970    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:19:03.098977    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:19:03.112875    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:19:03.112886    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:19:03.124301    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:19:03.124312    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:19:03.137371    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:19:03.137385    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:19:03.152086    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:03.152099    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:19:03.185636    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:03.185647    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:03.220798    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:19:03.220813    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
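
Each retry block above follows the same shape: a GET against https://10.0.2.15:8443/healthz that gives up after roughly five seconds (compare the timestamps of each "Checking" line and its "stopped" line), after which the runner falls through to log collection. A minimal sketch of that polling step, assuming the endpoint and timeout visible in these lines; the helper name and the TLS handling are illustrative, not minikube's actual api_server.go code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz probes the apiserver /healthz endpoint once, mirroring the
    // ~5s window between "Checking" and "stopped" in the log above. Hypothetical helper.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // produces the Client.Timeout error seen in the log
            Transport: &http.Transport{
                // assumption: the in-VM apiserver serves a self-signed certificate
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err) // on timeout, the caller proceeds to gather logs
        }
    }
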
	I0826 04:19:05.737417    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:10.739876    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:10.740062    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:10.763060    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:19:10.763148    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:10.777980    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:19:10.778053    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:10.789451    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:19:10.789515    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:10.800138    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:19:10.800209    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:10.810000    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:19:10.810068    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:10.820205    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:19:10.820268    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:10.830621    4148 logs.go:276] 0 containers: []
	W0826 04:19:10.830633    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:10.830693    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:10.841436    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:19:10.841451    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:19:10.841456    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:19:10.852969    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:19:10.852979    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:10.864806    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:19:10.864816    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:19:10.879197    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:19:10.879208    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:19:10.892845    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:10.892855    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:10.930904    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:19:10.930915    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:19:10.942026    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:19:10.942037    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:19:10.954014    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:19:10.954025    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:19:10.968785    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:19:10.968797    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:19:10.986723    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:19:10.986732    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:19:10.997958    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:10.997972    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:19:11.031318    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:11.031348    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:11.035348    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:11.035355    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
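
The container IDs fed into each gathering pass come from the per-component docker ps filters logged above, one `k8s_<component>` name filter per control-plane component. A sketch of that discovery step under the assumption of plain os/exec shelling; the component list mirrors the filters in the log, and the function name is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of containers whose name matches the
    // k8s_<component> prefix, mirroring the docker ps invocations logged above.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_"+component, "--format={{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // e.g. "0 containers: []" for kindnet, as in the warning lines above
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
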
	I0826 04:19:13.562303    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:18.564546    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:18.564787    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:18.586378    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:19:18.586478    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:18.601267    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:19:18.601341    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:18.614857    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:19:18.614926    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:18.625599    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:19:18.625665    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:18.636424    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:19:18.636495    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:18.646704    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:19:18.646772    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:18.657421    4148 logs.go:276] 0 containers: []
	W0826 04:19:18.657435    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:18.657494    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:18.667996    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:19:18.668013    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:18.668018    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:19:18.702445    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:19:18.702454    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:19:18.717029    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:19:18.717042    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:19:18.731140    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:19:18.731154    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:19:18.746970    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:19:18.746984    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:19:18.758551    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:19:18.758565    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:19:18.773398    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:19:18.773412    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:19:18.786709    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:18.786737    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:18.811632    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:19:18.811645    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:18.823883    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:18.823894    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:18.828538    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:18.828550    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:18.863973    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:19:18.863985    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:19:18.875891    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:19:18.875905    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:19:21.395685    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:26.397900    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:26.398097    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:26.416190    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:19:26.416284    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:26.436998    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:19:26.437068    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:26.448102    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:19:26.448166    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:26.459726    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:19:26.459813    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:26.471769    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:19:26.471839    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:26.483069    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:19:26.483139    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:26.493907    4148 logs.go:276] 0 containers: []
	W0826 04:19:26.493917    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:26.493969    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:26.505416    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:19:26.505431    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:26.505437    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:26.510265    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:26.510271    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:26.544834    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:19:26.544845    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:19:26.559465    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:19:26.559476    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:19:26.571646    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:26.571660    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:26.597424    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:26.597433    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:19:26.632373    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:19:26.632382    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:19:26.644141    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:19:26.644150    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:19:26.659775    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:19:26.659789    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:19:26.671950    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:19:26.671964    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:19:26.690112    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:19:26.690123    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:19:26.701761    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:19:26.701773    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:26.713649    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:19:26.713661    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:19:29.230930    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:34.233086    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:34.233292    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:34.248109    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:19:34.248188    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:34.260233    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:19:34.260303    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:34.271675    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:19:34.271751    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:34.284233    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:19:34.284308    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:34.295273    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:19:34.295346    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:34.306501    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:19:34.306567    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:34.317413    4148 logs.go:276] 0 containers: []
	W0826 04:19:34.317430    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:34.317487    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:34.335785    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:19:34.335800    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:19:34.335806    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:19:34.347732    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:34.347743    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:34.371224    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:19:34.371234    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:34.382660    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:34.382671    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:34.386923    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:19:34.386934    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:19:34.401140    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:19:34.401151    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:19:34.412932    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:19:34.412945    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:19:34.424705    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:19:34.424715    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:19:34.440094    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:19:34.440107    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:19:34.458926    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:19:34.458939    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:19:34.471368    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:34.471384    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:19:34.507223    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:34.507243    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:34.550726    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:19:34.550739    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:19:37.068236    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:42.070551    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:42.071065    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:42.104433    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:19:42.104557    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:42.128099    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:19:42.128194    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:42.142237    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:19:42.142300    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:42.154476    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:19:42.154553    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:42.167811    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:19:42.167881    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:42.179402    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:19:42.179474    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:42.189986    4148 logs.go:276] 0 containers: []
	W0826 04:19:42.190003    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:42.190058    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:42.200656    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:19:42.200670    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:19:42.200677    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:19:42.213389    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:19:42.213399    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:19:42.229551    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:42.229562    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:42.254527    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:42.254539    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:19:42.289459    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:42.289469    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:42.331591    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:19:42.331601    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:19:42.347269    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:19:42.347281    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:19:42.362912    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:19:42.362925    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:19:42.375041    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:42.375053    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:42.379284    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:19:42.379291    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:19:42.391055    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:19:42.391068    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:19:42.406971    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:19:42.406982    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:19:42.425048    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:19:42.425059    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:44.939398    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:49.941588    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:49.941676    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:49.954176    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:19:49.954251    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:49.966343    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:19:49.966410    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:49.982180    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:19:49.982252    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:49.993911    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:19:49.993984    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:50.007445    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:19:50.007523    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:50.020225    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:19:50.020296    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:50.031600    4148 logs.go:276] 0 containers: []
	W0826 04:19:50.031612    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:50.031665    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:50.043509    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:19:50.043525    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:19:50.043530    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:19:50.065351    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:19:50.065365    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:19:50.084947    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:50.084961    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:50.090044    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:50.090053    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:50.128405    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:19:50.128419    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:19:50.148166    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:19:50.148184    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:19:50.161400    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:19:50.161420    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:19:50.175192    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:19:50.175205    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:19:50.191682    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:50.191695    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:50.216010    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:50.216017    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:19:50.250928    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:19:50.250941    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:19:50.265910    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:19:50.265925    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:19:50.278179    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:19:50.278191    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:19:52.793088    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:19:57.794950    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:19:57.795251    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:19:57.824328    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:19:57.824456    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:19:57.842725    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:19:57.842809    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:19:57.857102    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:19:57.857180    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:19:57.869909    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:19:57.869969    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:19:57.881189    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:19:57.881261    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:19:57.896761    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:19:57.896826    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:19:57.907656    4148 logs.go:276] 0 containers: []
	W0826 04:19:57.907668    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:19:57.907719    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:19:57.918272    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:19:57.918289    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:19:57.918294    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:19:57.952115    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:19:57.952122    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:19:57.987691    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:19:57.987700    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:19:58.003155    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:19:58.003165    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:19:58.026581    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:19:58.026591    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:19:58.038739    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:19:58.038751    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:19:58.056164    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:19:58.056175    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:19:58.068272    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:19:58.068283    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:19:58.072979    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:19:58.072984    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:19:58.087845    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:19:58.087858    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:19:58.102031    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:19:58.102045    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:19:58.114128    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:19:58.114140    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:19:58.126635    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:19:58.126646    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:00.642814    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:05.645129    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:05.645266    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:05.659147    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:20:05.659222    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:05.670569    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:20:05.670646    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:05.681337    4148 logs.go:276] 2 containers: [a2d9258c2ed6 fd26afc6c747]
	I0826 04:20:05.681406    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:05.691905    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:20:05.691977    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:05.702081    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:20:05.702149    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:05.712128    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:20:05.712193    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:05.722040    4148 logs.go:276] 0 containers: []
	W0826 04:20:05.722051    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:05.722108    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:05.737081    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:20:05.737096    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:20:05.737101    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:05.748589    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:05.748601    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:20:05.781978    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:05.781988    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:05.815848    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:20:05.815860    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:20:05.827602    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:20:05.827618    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:20:05.839543    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:20:05.839554    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:20:05.860935    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:20:05.860950    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:20:05.876152    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:05.876166    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:05.900930    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:05.900939    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:05.905480    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:20:05.905489    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:20:05.920023    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:20:05.920035    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:20:05.934394    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:20:05.934406    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:20:05.946380    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:20:05.946394    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:20:08.462992    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:13.463489    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:13.463783    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:13.480130    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:20:13.480194    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:13.492544    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:20:13.492604    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:13.503685    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:20:13.503737    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:13.514722    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:20:13.514774    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:13.526110    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:20:13.526167    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:13.537736    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:20:13.537792    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:13.548057    4148 logs.go:276] 0 containers: []
	W0826 04:20:13.548067    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:13.548114    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:13.558135    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:20:13.558178    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:20:13.558183    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:20:13.573626    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:20:13.573636    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:20:13.585635    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:13.585644    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:13.621880    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:20:13.621894    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:20:13.635807    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:20:13.635820    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:20:13.646915    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:20:13.646929    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:20:13.663172    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:20:13.663184    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:20:13.680518    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:13.680530    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:13.706069    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:20:13.706076    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:20:13.717895    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:13.717909    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:13.722436    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:20:13.722444    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:20:13.736995    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:20:13.737007    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:20:13.751809    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:13.751821    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:20:13.785971    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:20:13.785986    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:20:13.797458    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:20:13.797471    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
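
Every "Gathering logs for ..." line pairs a source label with one shell command run through /bin/bash -c: docker logs --tail 400 for each discovered container (note the coredns set grows from two to four IDs in the later cycles), journalctl for the kubelet and Docker units, a filtered dmesg, kubectl describe nodes, and a crictl-with-docker-fallback for container status. A sketch of that dispatch, assuming the exact command strings shown in the log; gather is a hypothetical wrapper, not minikube's logs.go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-collection command through bash -c, the same way the
    // ssh_runner lines above wrap each command. Illustrative, not minikube's code.
    func gather(label, cmd string) {
        fmt.Println("Gathering logs for", label, "...")
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("%s: %d bytes\n", label, len(out))
    }

    func main() {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            // per-container sources reuse the IDs discovered earlier, e.g.:
            "kube-apiserver [6ff8d511b9ee]": "docker logs --tail 400 6ff8d511b9ee",
        }
        for label, cmd := range sources {
            gather(label, cmd)
        }
    }
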
	I0826 04:20:16.311445    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:21.313729    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:21.313949    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:21.343286    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:20:21.343380    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:21.364210    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:20:21.364289    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:21.377471    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:20:21.377546    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:21.388854    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:20:21.388917    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:21.398888    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:20:21.398955    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:21.409090    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:20:21.409163    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:21.419175    4148 logs.go:276] 0 containers: []
	W0826 04:20:21.419187    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:21.419246    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:21.429658    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:20:21.429677    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:21.429682    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:20:21.463143    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:20:21.463154    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:20:21.477585    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:20:21.477594    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:20:21.498503    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:21.498514    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:21.502795    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:20:21.502805    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:20:21.518966    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:20:21.518978    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:20:21.530701    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:20:21.530716    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:20:21.542723    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:20:21.542737    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:20:21.566731    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:21.566749    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:21.609705    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:20:21.609719    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:20:21.622054    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:21.622065    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:21.646585    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:20:21.646597    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:20:21.660353    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:20:21.660363    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:20:21.672267    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:20:21.672281    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:20:21.684321    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:20:21.684331    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:24.198658    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:29.201074    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:29.201250    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:29.222883    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:20:29.222971    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:29.236965    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:20:29.237040    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:29.248616    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:20:29.248684    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:29.259373    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:20:29.259443    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:29.270052    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:20:29.270123    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:29.280833    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:20:29.280902    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:29.292493    4148 logs.go:276] 0 containers: []
	W0826 04:20:29.292510    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:29.292560    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:29.303876    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:20:29.303894    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:20:29.303899    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:20:29.318654    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:20:29.318666    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:20:29.330568    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:20:29.330582    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:20:29.343217    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:20:29.343227    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:20:29.358000    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:20:29.358010    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:20:29.370216    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:29.370228    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:29.379506    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:29.379519    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:29.413744    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:20:29.413755    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:20:29.425314    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:20:29.425327    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:20:29.436797    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:20:29.436808    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:20:29.454229    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:29.454241    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:29.478177    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:29.478185    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:20:29.511026    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:20:29.511035    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:20:29.524867    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:20:29.524877    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:20:29.536700    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:20:29.536710    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:32.050542    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:37.052794    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:37.052940    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:37.067960    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:20:37.068045    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:37.080180    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:20:37.080249    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:37.090989    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:20:37.091056    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:37.101002    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:20:37.101070    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:37.111216    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:20:37.111279    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:37.122145    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:20:37.122215    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:37.132275    4148 logs.go:276] 0 containers: []
	W0826 04:20:37.132287    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:37.132342    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:37.142426    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:20:37.142441    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:37.142446    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:37.146709    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:20:37.146718    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:20:37.160846    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:20:37.160859    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:20:37.176386    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:20:37.176398    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:20:37.187812    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:20:37.187826    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:20:37.199440    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:37.199453    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:20:37.232480    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:20:37.232488    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:20:37.246589    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:20:37.246602    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:20:37.258284    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:20:37.258295    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:20:37.275469    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:37.275482    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:37.314381    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:20:37.314395    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:20:37.329963    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:20:37.329977    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:20:37.341289    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:37.341299    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:37.365318    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:20:37.365326    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:37.376819    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:20:37.376832    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:20:39.891465    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:44.893691    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:44.893809    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:44.904588    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:20:44.904656    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:44.915389    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:20:44.915454    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:44.925371    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:20:44.925432    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:44.936045    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:20:44.936119    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:44.946391    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:20:44.946458    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:44.957352    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:20:44.957410    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:44.973854    4148 logs.go:276] 0 containers: []
	W0826 04:20:44.973866    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:44.973925    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:44.993061    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:20:44.993082    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:20:44.993087    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:20:45.010811    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:20:45.010823    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:20:45.022362    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:20:45.022377    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:45.033839    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:45.033854    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:45.038380    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:20:45.038387    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:20:45.052295    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:20:45.052307    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:20:45.065467    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:20:45.065480    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:20:45.083618    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:20:45.083630    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:20:45.094861    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:45.094875    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:45.128771    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:20:45.128784    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:20:45.143843    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:20:45.143856    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:20:45.155817    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:20:45.155837    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:20:45.167380    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:45.167393    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:20:45.199785    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:45.199795    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:45.223172    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:20:45.223180    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:20:47.736224    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:20:52.738450    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:20:52.738603    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:20:52.755537    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:20:52.755627    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:20:52.769341    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:20:52.769414    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:20:52.782355    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:20:52.782427    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:20:52.792665    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:20:52.792736    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:20:52.803285    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:20:52.803350    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:20:52.813774    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:20:52.813846    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:20:52.824154    4148 logs.go:276] 0 containers: []
	W0826 04:20:52.824165    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:20:52.824219    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:20:52.835362    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:20:52.835382    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:20:52.835387    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:20:52.870377    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:20:52.870386    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:20:52.904593    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:20:52.904606    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:20:52.919570    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:20:52.919582    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:20:52.931133    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:20:52.931150    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:20:52.945642    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:20:52.945657    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:20:52.958291    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:20:52.958304    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:20:52.974062    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:20:52.974073    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:20:52.998834    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:20:52.998842    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:20:53.002798    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:20:53.002807    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:20:53.014478    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:20:53.014489    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:20:53.026434    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:20:53.026450    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:20:53.037911    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:20:53.037921    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:20:53.053121    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:20:53.053133    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:20:53.065136    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:20:53.065148    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:20:55.584596    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:00.585559    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:00.585857    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:00.612448    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:21:00.612576    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:00.629414    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:21:00.629516    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:00.643390    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:21:00.643465    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:00.654644    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:21:00.654707    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:00.664881    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:21:00.664946    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:00.675561    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:21:00.675626    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:00.685946    4148 logs.go:276] 0 containers: []
	W0826 04:21:00.685959    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:00.686014    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:00.699441    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:21:00.699458    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:21:00.699463    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:21:00.713964    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:21:00.713976    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:21:00.726020    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:21:00.726033    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:21:00.743599    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:21:00.743610    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:21:00.754969    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:21:00.754982    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:00.766547    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:21:00.766561    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:21:00.782888    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:21:00.782902    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:21:00.795479    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:21:00.795493    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:21:00.807018    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:21:00.807028    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:21:00.821332    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:21:00.821343    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:21:00.833238    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:00.833248    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:00.860435    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:00.860447    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:21:00.895798    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:00.895807    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:00.900149    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:00.900158    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:00.938148    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:21:00.938158    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:21:03.451472    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:08.453688    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:08.453902    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:08.473224    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:21:08.473325    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:08.488507    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:21:08.488591    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:08.502368    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:21:08.502440    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:08.513874    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:21:08.513949    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:08.524931    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:21:08.524999    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:08.535631    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:21:08.535693    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:08.545270    4148 logs.go:276] 0 containers: []
	W0826 04:21:08.545286    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:08.545345    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:08.562536    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:21:08.562556    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:21:08.562561    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:21:08.574580    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:21:08.574592    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:08.585875    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:08.585888    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:21:08.619885    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:08.619893    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:08.624454    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:21:08.624460    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:21:08.638776    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:21:08.638789    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:21:08.650471    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:08.650485    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:08.674355    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:21:08.674366    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:21:08.688784    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:21:08.688799    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:21:08.703804    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:21:08.703818    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:21:08.715577    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:21:08.715590    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:21:08.735418    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:08.735431    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:08.769942    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:21:08.769957    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:21:08.784126    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:21:08.784139    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:21:08.795606    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:21:08.795618    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:21:11.316475    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:16.318229    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:16.318440    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:16.338800    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:21:16.338886    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:16.353123    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:21:16.353191    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:16.369540    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:21:16.369604    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:16.380136    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:21:16.380201    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:16.394734    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:21:16.394791    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:16.404759    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:21:16.404820    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:16.415416    4148 logs.go:276] 0 containers: []
	W0826 04:21:16.415429    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:16.415491    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:16.425756    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:21:16.425779    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:21:16.425783    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:21:16.438714    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:21:16.438725    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:21:16.450041    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:16.450053    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:21:16.484618    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:21:16.484628    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:21:16.499063    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:21:16.499073    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:21:16.512517    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:21:16.512529    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:21:16.524746    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:21:16.524757    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:21:16.536041    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:21:16.536056    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:16.547913    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:16.547924    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:16.583010    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:21:16.583021    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:21:16.595005    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:21:16.595016    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:21:16.612890    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:16.612899    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:16.617430    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:21:16.617436    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:21:16.632228    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:21:16.632242    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:21:16.644386    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:16.644395    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:19.165755    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:24.163172    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:24.163580    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:24.195431    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:21:24.195566    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:24.214342    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:21:24.214425    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:24.228208    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:21:24.228286    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:24.240142    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:21:24.240211    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:24.250866    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:21:24.250932    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:24.261482    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:21:24.261549    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:24.271489    4148 logs.go:276] 0 containers: []
	W0826 04:21:24.271498    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:24.271548    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:24.284305    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:21:24.284326    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:21:24.284331    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:21:24.297117    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:24.297133    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:24.322347    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:21:24.322356    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:24.333963    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:24.333976    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:24.368670    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:21:24.368682    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:21:24.386943    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:21:24.386954    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:21:24.398882    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:21:24.398897    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:21:24.413502    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:21:24.413515    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:21:24.424971    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:21:24.424985    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:21:24.442892    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:24.442903    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:21:24.477867    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:21:24.477878    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:21:24.491720    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:21:24.491731    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:21:24.503603    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:24.503615    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:24.507944    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:21:24.507953    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:21:24.522470    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:21:24.522482    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:21:27.034299    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:32.033592    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:32.033758    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:32.044792    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:21:32.044864    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:32.055562    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:21:32.055632    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:32.066323    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:21:32.066394    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:32.076393    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:21:32.076460    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:32.087006    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:21:32.087076    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:32.098399    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:21:32.098466    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:32.108953    4148 logs.go:276] 0 containers: []
	W0826 04:21:32.108967    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:32.109026    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:32.120958    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:21:32.120974    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:21:32.120979    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:21:32.136674    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:21:32.136685    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:32.148762    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:21:32.148773    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:21:32.161291    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:21:32.161302    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:21:32.172459    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:21:32.172470    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:21:32.183953    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:32.183964    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:32.208683    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:32.208692    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:21:32.242665    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:32.242672    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:32.279862    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:21:32.279876    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:21:32.291585    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:21:32.291601    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:21:32.309905    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:21:32.309919    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:21:32.324201    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:21:32.324211    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:21:32.335949    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:21:32.335963    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:21:32.354314    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:32.354325    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:32.358820    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:21:32.358825    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:21:34.871644    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:39.870430    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:39.870643    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:39.886862    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:21:39.886942    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:39.899568    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:21:39.899643    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:39.910787    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:21:39.910856    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:39.921447    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:21:39.921518    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:39.932094    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:21:39.932160    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:39.942757    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:21:39.942821    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:39.953063    4148 logs.go:276] 0 containers: []
	W0826 04:21:39.953074    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:39.953128    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:39.967912    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:21:39.967930    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:21:39.967938    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:21:39.979838    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:21:39.979859    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:21:39.997357    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:21:39.997370    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:21:40.012489    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:21:40.012499    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:21:40.027024    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:21:40.027037    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:21:40.038336    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:21:40.038349    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:21:40.050585    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:40.050596    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:40.074759    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:40.074772    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:21:40.108926    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:21:40.108935    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:21:40.121066    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:21:40.121080    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:21:40.135878    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:40.135892    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:40.172099    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:21:40.172112    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:40.183920    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:40.183934    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:40.188537    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:21:40.188544    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:21:40.203058    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:21:40.203072    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:21:42.717001    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:47.716625    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:47.716769    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0826 04:21:47.732332    4148 logs.go:276] 1 containers: [6ff8d511b9ee]
	I0826 04:21:47.732418    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0826 04:21:47.744365    4148 logs.go:276] 1 containers: [bcbc2a012fc7]
	I0826 04:21:47.744437    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0826 04:21:47.755652    4148 logs.go:276] 4 containers: [3f889327f434 717ca754f70f a2d9258c2ed6 fd26afc6c747]
	I0826 04:21:47.755719    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0826 04:21:47.765871    4148 logs.go:276] 1 containers: [d278e2463601]
	I0826 04:21:47.765944    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0826 04:21:47.777028    4148 logs.go:276] 1 containers: [ccf3e861a584]
	I0826 04:21:47.777096    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0826 04:21:47.787972    4148 logs.go:276] 1 containers: [72b91c706799]
	I0826 04:21:47.788038    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0826 04:21:47.798092    4148 logs.go:276] 0 containers: []
	W0826 04:21:47.798105    4148 logs.go:278] No container was found matching "kindnet"
	I0826 04:21:47.798160    4148 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0826 04:21:47.809130    4148 logs.go:276] 1 containers: [efce6badf459]
	I0826 04:21:47.809147    4148 logs.go:123] Gathering logs for etcd [bcbc2a012fc7] ...
	I0826 04:21:47.809153    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbc2a012fc7"
	I0826 04:21:47.823769    4148 logs.go:123] Gathering logs for storage-provisioner [efce6badf459] ...
	I0826 04:21:47.823780    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efce6badf459"
	I0826 04:21:47.835490    4148 logs.go:123] Gathering logs for coredns [3f889327f434] ...
	I0826 04:21:47.835503    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f889327f434"
	I0826 04:21:47.847416    4148 logs.go:123] Gathering logs for coredns [717ca754f70f] ...
	I0826 04:21:47.847427    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717ca754f70f"
	I0826 04:21:47.859692    4148 logs.go:123] Gathering logs for kube-proxy [ccf3e861a584] ...
	I0826 04:21:47.859702    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf3e861a584"
	I0826 04:21:47.874465    4148 logs.go:123] Gathering logs for kubelet ...
	I0826 04:21:47.874476    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 04:21:47.908128    4148 logs.go:123] Gathering logs for dmesg ...
	I0826 04:21:47.908140    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 04:21:47.912256    4148 logs.go:123] Gathering logs for describe nodes ...
	I0826 04:21:47.912264    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 04:21:47.946549    4148 logs.go:123] Gathering logs for coredns [fd26afc6c747] ...
	I0826 04:21:47.946560    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd26afc6c747"
	I0826 04:21:47.960034    4148 logs.go:123] Gathering logs for kube-scheduler [d278e2463601] ...
	I0826 04:21:47.960048    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d278e2463601"
	I0826 04:21:47.974988    4148 logs.go:123] Gathering logs for Docker ...
	I0826 04:21:47.975004    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0826 04:21:47.998948    4148 logs.go:123] Gathering logs for container status ...
	I0826 04:21:47.998957    4148 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 04:21:48.011076    4148 logs.go:123] Gathering logs for kube-apiserver [6ff8d511b9ee] ...
	I0826 04:21:48.011090    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff8d511b9ee"
	I0826 04:21:48.026389    4148 logs.go:123] Gathering logs for coredns [a2d9258c2ed6] ...
	I0826 04:21:48.026399    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d9258c2ed6"
	I0826 04:21:48.038211    4148 logs.go:123] Gathering logs for kube-controller-manager [72b91c706799] ...
	I0826 04:21:48.038224    4148 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72b91c706799"
	I0826 04:21:50.557358    4148 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0826 04:21:55.559461    4148 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0826 04:21:55.564588    4148 out.go:201] 
	W0826 04:21:55.567550    4148 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0826 04:21:55.567569    4148 out.go:270] * 
	W0826 04:21:55.569000    4148 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:21:55.583458    4148 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-743000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (595.56s)
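
The stderr trace above is minikube's node-wait loop: every few seconds it probes the guest apiserver's /healthz endpoint, and after each failed probe it sweeps the component logs (docker ps -a --filter=name=k8s_<component> to find container IDs, then docker logs --tail 400 on each) before probing again, until the 6m0s deadline expires. As a rough illustration only — not minikube's actual implementation — the Go sketch below reproduces the shape of that probe; the address, the short per-request timeout, and the 6m deadline are read off the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The guest's serving certificate is not trusted by the host,
		// so a raw probe has to skip TLS verification.
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe timeout, as in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // "wait 6m0s for node"
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // back off between probes
		}
		fmt.Println("apiserver healthz never reported healthy")
	}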

TestPause/serial/Start (9.87s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-607000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-607000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.82707125s)

-- stdout --
	* [pause-607000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-607000" primary control-plane node in "pause-607000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-607000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-607000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-607000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-607000 -n pause-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-607000 -n pause-607000: exit status 7 (38.117792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-607000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.87s)
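
Every qemu2 start in this run fails the same way: the driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so VM creation is retried once and then aborted with GUEST_PROVISION; the failures point at the daemon not running on the CI host rather than at the tests themselves. Below is a minimal reachability sketch, assuming the daemon accepts plain stream connections on that unix socket (an assumption, though it is consistent with the "Connection refused" errno the log reports); the path is taken from the log.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" means the socket file exists but nothing
		// is accepting on it; a missing file would report "no such file
		// or directory" instead.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}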

TestNoKubernetes/serial/StartWithK8s (9.95s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-819000 --driver=qemu2 
E0826 04:22:46.841568    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-819000 --driver=qemu2 : exit status 80 (9.8869635s)

-- stdout --
	* [NoKubernetes-819000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-819000" primary control-plane node in "NoKubernetes-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-819000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-819000 -n NoKubernetes-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-819000 -n NoKubernetes-819000: exit status 7 (64.311ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.95s)
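
All four starts in this group die at the same point: the qemu2 driver launches QEMU through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A quick way to confirm the daemon is down, independently of minikube, is to probe that Unix socket directly. The Go sketch below is illustrative only and is not part of the minikube test suite; the socket path is the one reported above.

// Minimal probe sketch (not minikube code): dial the socket_vmnet
// control socket the qemu2 driver uses and report whether anything
// is listening there.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failure above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here reproduces the error in this report:
		// the daemon is not running (or listens on a different path).
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}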

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-819000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-819000 --no-kubernetes --driver=qemu2 : exit status 80 (5.25864525s)

-- stdout --
	* [NoKubernetes-819000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-819000
	* Restarting existing qemu2 VM for "NoKubernetes-819000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-819000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-819000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-819000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-819000 -n NoKubernetes-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-819000 -n NoKubernetes-819000: exit status 7 (62.979333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.34s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-819000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-819000 --no-kubernetes --driver=qemu2 : exit status 80 (5.270119459s)

-- stdout --
	* [NoKubernetes-819000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-819000
	* Restarting existing qemu2 VM for "NoKubernetes-819000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-819000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-819000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-819000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-819000 -n NoKubernetes-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-819000 -n NoKubernetes-819000: exit status 7 (65.121541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.34s)

TestNoKubernetes/serial/StartNoArgs (6.61s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-819000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-819000 --driver=qemu2 : exit status 80 (6.575270834s)

-- stdout --
	* [NoKubernetes-819000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-819000
	* Restarting existing qemu2 VM for "NoKubernetes-819000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-819000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-819000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-819000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-819000 -n NoKubernetes-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-819000 -n NoKubernetes-819000: exit status 7 (32.846416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (6.61s)
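
StartWithStopK8s, Start, and StartNoArgs all reuse the NoKubernetes-819000 profile ("Restarting existing qemu2 VM") and fail on the same refused socket, so they are casualties of the first failure rather than three independent regressions. A shared pre-check could make that explicit. The helper below is only a sketch of such a guard, built around the same status invocation the post-mortem sections run; the helper name and its placement are assumptions, not code from no_kubernetes_test.go.

package nok8s // hypothetical package; shown for illustration only

import (
	"os/exec"
	"strings"
	"testing"
)

// skipIfHostDown skips a dependent serial subtest when the shared
// profile's host is not running, so a single daemon outage is reported
// once instead of once per subtest.
func skipIfHostDown(t *testing.T, profile string) {
	t.Helper()
	// Same command the post-mortems above run; it still prints the host
	// state ("Stopped") when it exits non-zero (exit status 7).
	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile).Output()
	if state := strings.TrimSpace(string(out)); state != "Running" {
		t.Skipf("host %q is %q, not Running; skipping dependent subtest", profile, state)
	}
}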

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.8s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19501
- KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current197989293/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.80s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.58s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19501
- KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3727374413/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.58s)
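
Both skip-upgrade cases fail for the same structural reason: hyperkit is an Intel-only hypervisor, so on this darwin/arm64 agent minikube refuses it with DRV_UNSUPPORTED_OS (exit status 56). On Apple-silicon runners these tests should arguably be skipped up front rather than counted as failures. The guard below is a minimal sketch of that idea; the helper name is an assumption, not the suite's actual skip logic.

package drivercheck // hypothetical package; shown for illustration only

import (
	"runtime"
	"testing"
)

// skipUnlessHyperkitSupported skips hyperkit-only tests on hosts where
// the driver cannot run: hyperkit requires darwin/amd64, and anything
// else produces the DRV_UNSUPPORTED_OS exit seen above.
func skipUnlessHyperkitSupported(t *testing.T) {
	t.Helper()
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		t.Skipf("hyperkit is unsupported on %s/%s", runtime.GOOS, runtime.GOARCH)
	}
}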

TestNetworkPlugins/group/auto/Start (9.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.773755375s)

-- stdout --
	* [auto-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-336000" primary control-plane node in "auto-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:23:43.974562    4615 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:23:43.974702    4615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:23:43.974705    4615 out.go:358] Setting ErrFile to fd 2...
	I0826 04:23:43.974708    4615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:23:43.974837    4615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:23:43.975838    4615 out.go:352] Setting JSON to false
	I0826 04:23:43.992084    4615 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3186,"bootTime":1724668237,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:23:43.992150    4615 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:23:43.997715    4615 out.go:177] * [auto-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:23:44.005666    4615 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:23:44.005717    4615 notify.go:220] Checking for updates...
	I0826 04:23:44.012554    4615 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:23:44.015626    4615 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:23:44.018516    4615 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:23:44.021606    4615 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:23:44.024619    4615 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:23:44.027937    4615 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:23:44.028008    4615 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:23:44.028064    4615 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:23:44.032579    4615 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:23:44.038583    4615 start.go:297] selected driver: qemu2
	I0826 04:23:44.038592    4615 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:23:44.038599    4615 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:23:44.040960    4615 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:23:44.043587    4615 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:23:44.046745    4615 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:23:44.046791    4615 cni.go:84] Creating CNI manager for ""
	I0826 04:23:44.046801    4615 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:23:44.046808    4615 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:23:44.046844    4615 start.go:340] cluster config:
	{Name:auto-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:23:44.050501    4615 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:23:44.057559    4615 out.go:177] * Starting "auto-336000" primary control-plane node in "auto-336000" cluster
	I0826 04:23:44.061621    4615 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:23:44.061635    4615 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:23:44.061642    4615 cache.go:56] Caching tarball of preloaded images
	I0826 04:23:44.061696    4615 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:23:44.061702    4615 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:23:44.061775    4615 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/auto-336000/config.json ...
	I0826 04:23:44.061804    4615 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/auto-336000/config.json: {Name:mk973c56deddf2a01b48ab0f77499b364b708d83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:23:44.062052    4615 start.go:360] acquireMachinesLock for auto-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:23:44.062093    4615 start.go:364] duration metric: took 31.916µs to acquireMachinesLock for "auto-336000"
	I0826 04:23:44.062107    4615 start.go:93] Provisioning new machine with config: &{Name:auto-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:23:44.062159    4615 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:23:44.070588    4615 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:23:44.088659    4615 start.go:159] libmachine.API.Create for "auto-336000" (driver="qemu2")
	I0826 04:23:44.088687    4615 client.go:168] LocalClient.Create starting
	I0826 04:23:44.088757    4615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:23:44.088792    4615 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:44.088801    4615 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:44.088834    4615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:23:44.088861    4615 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:44.088870    4615 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:44.089252    4615 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:23:44.252831    4615 main.go:141] libmachine: Creating SSH key...
	I0826 04:23:44.291173    4615 main.go:141] libmachine: Creating Disk image...
	I0826 04:23:44.291178    4615 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:23:44.291357    4615 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2
	I0826 04:23:44.300518    4615 main.go:141] libmachine: STDOUT: 
	I0826 04:23:44.300540    4615 main.go:141] libmachine: STDERR: 
	I0826 04:23:44.300584    4615 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2 +20000M
	I0826 04:23:44.308402    4615 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:23:44.308421    4615 main.go:141] libmachine: STDERR: 
	I0826 04:23:44.308436    4615 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2
	I0826 04:23:44.308439    4615 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:23:44.308450    4615 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:23:44.308478    4615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:bd:66:46:db:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2
	I0826 04:23:44.310046    4615 main.go:141] libmachine: STDOUT: 
	I0826 04:23:44.310062    4615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:23:44.310083    4615 client.go:171] duration metric: took 221.396042ms to LocalClient.Create
	I0826 04:23:46.312218    4615 start.go:128] duration metric: took 2.250089333s to createHost
	I0826 04:23:46.312292    4615 start.go:83] releasing machines lock for "auto-336000", held for 2.250240209s
	W0826 04:23:46.312400    4615 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:46.331767    4615 out.go:177] * Deleting "auto-336000" in qemu2 ...
	W0826 04:23:46.367479    4615 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:46.367512    4615 start.go:729] Will try again in 5 seconds ...
	I0826 04:23:51.369610    4615 start.go:360] acquireMachinesLock for auto-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:23:51.370029    4615 start.go:364] duration metric: took 339.417µs to acquireMachinesLock for "auto-336000"
	I0826 04:23:51.370151    4615 start.go:93] Provisioning new machine with config: &{Name:auto-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:23:51.370423    4615 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:23:51.379552    4615 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:23:51.430251    4615 start.go:159] libmachine.API.Create for "auto-336000" (driver="qemu2")
	I0826 04:23:51.430294    4615 client.go:168] LocalClient.Create starting
	I0826 04:23:51.430406    4615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:23:51.430467    4615 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:51.430487    4615 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:51.430544    4615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:23:51.430594    4615 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:51.430612    4615 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:51.431080    4615 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:23:51.612250    4615 main.go:141] libmachine: Creating SSH key...
	I0826 04:23:51.648813    4615 main.go:141] libmachine: Creating Disk image...
	I0826 04:23:51.648819    4615 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:23:51.648983    4615 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2
	I0826 04:23:51.658365    4615 main.go:141] libmachine: STDOUT: 
	I0826 04:23:51.658383    4615 main.go:141] libmachine: STDERR: 
	I0826 04:23:51.658430    4615 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2 +20000M
	I0826 04:23:51.666338    4615 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:23:51.666355    4615 main.go:141] libmachine: STDERR: 
	I0826 04:23:51.666366    4615 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2
	I0826 04:23:51.666371    4615 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:23:51.666383    4615 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:23:51.666413    4615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:a7:9c:d8:71:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/auto-336000/disk.qcow2
	I0826 04:23:51.667938    4615 main.go:141] libmachine: STDOUT: 
	I0826 04:23:51.667953    4615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:23:51.667973    4615 client.go:171] duration metric: took 237.677666ms to LocalClient.Create
	I0826 04:23:53.670133    4615 start.go:128] duration metric: took 2.299697334s to createHost
	I0826 04:23:53.670228    4615 start.go:83] releasing machines lock for "auto-336000", held for 2.300225833s
	W0826 04:23:53.670594    4615 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:53.687252    4615 out.go:201] 
	W0826 04:23:53.690225    4615 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:23:53.690254    4615 out.go:270] * 
	* 
	W0826 04:23:53.692746    4615 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:23:53.706167    4615 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.78s)
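
The verbose trace above pins down the failure point: disk preparation succeeds (both qemu-img convert and qemu-img resize return cleanly), but launching qemu-system-aarch64 through socket_vmnet_client dies on the refused socket. start.go then deletes the half-created machine, waits five seconds ("Will try again in 5 seconds", start.go:729), retries once, and exits with GUEST_PROVISION. The snippet below sketches only that retry shape; the function names are invented for illustration and are not minikube's API.

package main

import (
	"fmt"
	"time"
)

// startHost stands in for the qemu2 create path that fails above; the
// error text mirrors the one in this report.
func startHost() error {
	return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}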

TestNetworkPlugins/group/calico/Start (9.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.82403775s)

-- stdout --
	* [calico-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-336000" primary control-plane node in "calico-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:23:55.874268    4728 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:23:55.874416    4728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:23:55.874420    4728 out.go:358] Setting ErrFile to fd 2...
	I0826 04:23:55.874422    4728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:23:55.874559    4728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:23:55.875606    4728 out.go:352] Setting JSON to false
	I0826 04:23:55.891852    4728 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3198,"bootTime":1724668237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:23:55.891929    4728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:23:55.896882    4728 out.go:177] * [calico-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:23:55.903888    4728 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:23:55.903942    4728 notify.go:220] Checking for updates...
	I0826 04:23:55.910796    4728 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:23:55.913873    4728 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:23:55.916815    4728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:23:55.919844    4728 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:23:55.922842    4728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:23:55.926120    4728 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:23:55.926193    4728 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:23:55.926242    4728 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:23:55.930824    4728 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:23:55.936764    4728 start.go:297] selected driver: qemu2
	I0826 04:23:55.936772    4728 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:23:55.936778    4728 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:23:55.939025    4728 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:23:55.941825    4728 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:23:55.944962    4728 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:23:55.945009    4728 cni.go:84] Creating CNI manager for "calico"
	I0826 04:23:55.945013    4728 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0826 04:23:55.945049    4728 start.go:340] cluster config:
	{Name:calico-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:23:55.948635    4728 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:23:55.956822    4728 out.go:177] * Starting "calico-336000" primary control-plane node in "calico-336000" cluster
	I0826 04:23:55.960878    4728 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:23:55.960893    4728 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:23:55.960904    4728 cache.go:56] Caching tarball of preloaded images
	I0826 04:23:55.960969    4728 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:23:55.960976    4728 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:23:55.961052    4728 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/calico-336000/config.json ...
	I0826 04:23:55.961064    4728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/calico-336000/config.json: {Name:mkf7507852cf4f8d4e301654609625d343c138b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:23:55.961287    4728 start.go:360] acquireMachinesLock for calico-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:23:55.961320    4728 start.go:364] duration metric: took 27.166µs to acquireMachinesLock for "calico-336000"
	I0826 04:23:55.961333    4728 start.go:93] Provisioning new machine with config: &{Name:calico-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:23:55.961362    4728 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:23:55.968822    4728 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:23:55.985867    4728 start.go:159] libmachine.API.Create for "calico-336000" (driver="qemu2")
	I0826 04:23:55.985903    4728 client.go:168] LocalClient.Create starting
	I0826 04:23:55.985975    4728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:23:55.986003    4728 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:55.986012    4728 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:55.986047    4728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:23:55.986072    4728 main.go:141] libmachine: Decoding PEM data...
	I0826 04:23:55.986083    4728 main.go:141] libmachine: Parsing certificate...
	I0826 04:23:55.986415    4728 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:23:56.149303    4728 main.go:141] libmachine: Creating SSH key...
	I0826 04:23:56.203368    4728 main.go:141] libmachine: Creating Disk image...
	I0826 04:23:56.203373    4728 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:23:56.203553    4728 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2
	I0826 04:23:56.212714    4728 main.go:141] libmachine: STDOUT: 
	I0826 04:23:56.212731    4728 main.go:141] libmachine: STDERR: 
	I0826 04:23:56.212773    4728 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2 +20000M
	I0826 04:23:56.220714    4728 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:23:56.220730    4728 main.go:141] libmachine: STDERR: 
	I0826 04:23:56.220742    4728 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2
	I0826 04:23:56.220746    4728 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:23:56.220757    4728 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:23:56.220786    4728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:60:4c:39:6d:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2
	I0826 04:23:56.222413    4728 main.go:141] libmachine: STDOUT: 
	I0826 04:23:56.222427    4728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:23:56.222445    4728 client.go:171] duration metric: took 236.542958ms to LocalClient.Create
	I0826 04:23:58.224656    4728 start.go:128] duration metric: took 2.263293167s to createHost
	I0826 04:23:58.224737    4728 start.go:83] releasing machines lock for "calico-336000", held for 2.26345925s
	W0826 04:23:58.224794    4728 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:58.234926    4728 out.go:177] * Deleting "calico-336000" in qemu2 ...
	W0826 04:23:58.274457    4728 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:23:58.274476    4728 start.go:729] Will try again in 5 seconds ...
	I0826 04:24:03.276570    4728 start.go:360] acquireMachinesLock for calico-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:03.276983    4728 start.go:364] duration metric: took 335.292µs to acquireMachinesLock for "calico-336000"
	I0826 04:24:03.277102    4728 start.go:93] Provisioning new machine with config: &{Name:calico-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:03.277405    4728 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:03.282971    4728 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:03.332592    4728 start.go:159] libmachine.API.Create for "calico-336000" (driver="qemu2")
	I0826 04:24:03.332639    4728 client.go:168] LocalClient.Create starting
	I0826 04:24:03.332743    4728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:03.332805    4728 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:03.332822    4728 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:03.332883    4728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:03.332929    4728 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:03.332944    4728 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:03.337205    4728 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:03.516824    4728 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:03.599750    4728 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:03.599755    4728 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:03.599918    4728 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2
	I0826 04:24:03.609225    4728 main.go:141] libmachine: STDOUT: 
	I0826 04:24:03.609247    4728 main.go:141] libmachine: STDERR: 
	I0826 04:24:03.609295    4728 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2 +20000M
	I0826 04:24:03.617092    4728 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:03.617109    4728 main.go:141] libmachine: STDERR: 
	I0826 04:24:03.617119    4728 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2
	I0826 04:24:03.617123    4728 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:03.617131    4728 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:03.617170    4728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:27:c2:67:9e:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/calico-336000/disk.qcow2
	I0826 04:24:03.618776    4728 main.go:141] libmachine: STDOUT: 
	I0826 04:24:03.618792    4728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:03.618805    4728 client.go:171] duration metric: took 286.166041ms to LocalClient.Create
	I0826 04:24:05.620938    4728 start.go:128] duration metric: took 2.343554625s to createHost
	I0826 04:24:05.621000    4728 start.go:83] releasing machines lock for "calico-336000", held for 2.34404925s
	W0826 04:24:05.621356    4728 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:05.636073    4728 out.go:201] 
	W0826 04:24:05.639979    4728 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:24:05.640031    4728 out.go:270] * 
	* 
	W0826 04:24:05.642710    4728 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:24:05.656930    4728 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.83s)
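
Every failure in this group has the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU is ever launched. A minimal pre-flight check for the build host might look like the sketch below; the paths are taken from the logs above, but the restart invocation is an assumption that depends on how socket_vmnet was installed and supervised.

    # Verify the socket_vmnet daemon is up (paths as seen in the logs above).
    ls -l /var/run/socket_vmnet     # the Unix socket should exist
    pgrep -fl socket_vmnet          # a daemon process should be running
    # If the daemon is down, one way to bring it back is to run it directly
    # (illustrative; the gateway address and service management vary by setup):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet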

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.817301042s)

-- stdout --
	* [custom-flannel-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-336000" primary control-plane node in "custom-flannel-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:24:08.044535    4846 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:24:08.044649    4846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:08.044652    4846 out.go:358] Setting ErrFile to fd 2...
	I0826 04:24:08.044654    4846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:08.044770    4846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:24:08.045843    4846 out.go:352] Setting JSON to false
	I0826 04:24:08.061809    4846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3211,"bootTime":1724668237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:24:08.061870    4846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:24:08.067100    4846 out.go:177] * [custom-flannel-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:24:08.071027    4846 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:24:08.071032    4846 notify.go:220] Checking for updates...
	I0826 04:24:08.077980    4846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:24:08.081012    4846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:24:08.087920    4846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:24:08.091007    4846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:24:08.093960    4846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:24:08.097251    4846 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:08.097332    4846 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:08.097390    4846 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:24:08.101919    4846 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:24:08.108930    4846 start.go:297] selected driver: qemu2
	I0826 04:24:08.108936    4846 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:24:08.108942    4846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:24:08.111297    4846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:24:08.115896    4846 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:24:08.119057    4846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:24:08.119088    4846 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0826 04:24:08.119096    4846 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0826 04:24:08.119128    4846 start.go:340] cluster config:
	{Name:custom-flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:24:08.122970    4846 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:24:08.131942    4846 out.go:177] * Starting "custom-flannel-336000" primary control-plane node in "custom-flannel-336000" cluster
	I0826 04:24:08.135828    4846 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:24:08.135843    4846 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:24:08.135852    4846 cache.go:56] Caching tarball of preloaded images
	I0826 04:24:08.135914    4846 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:24:08.135920    4846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:24:08.135977    4846 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/custom-flannel-336000/config.json ...
	I0826 04:24:08.135989    4846 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/custom-flannel-336000/config.json: {Name:mkebfd000df60404c287366a7feaf23e41086710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:24:08.136443    4846 start.go:360] acquireMachinesLock for custom-flannel-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:08.136480    4846 start.go:364] duration metric: took 29.083µs to acquireMachinesLock for "custom-flannel-336000"
	I0826 04:24:08.136496    4846 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:08.136530    4846 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:08.144921    4846 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:08.163084    4846 start.go:159] libmachine.API.Create for "custom-flannel-336000" (driver="qemu2")
	I0826 04:24:08.163110    4846 client.go:168] LocalClient.Create starting
	I0826 04:24:08.163180    4846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:08.163211    4846 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:08.163221    4846 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:08.163261    4846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:08.163286    4846 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:08.163294    4846 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:08.163805    4846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:08.327626    4846 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:08.414131    4846 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:08.414137    4846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:08.414313    4846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0826 04:24:08.423493    4846 main.go:141] libmachine: STDOUT: 
	I0826 04:24:08.423512    4846 main.go:141] libmachine: STDERR: 
	I0826 04:24:08.423551    4846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2 +20000M
	I0826 04:24:08.431374    4846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:08.431391    4846 main.go:141] libmachine: STDERR: 
	I0826 04:24:08.431408    4846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0826 04:24:08.431413    4846 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:08.431425    4846 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:08.431449    4846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:8b:d3:15:f4:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0826 04:24:08.433062    4846 main.go:141] libmachine: STDOUT: 
	I0826 04:24:08.433078    4846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:08.433098    4846 client.go:171] duration metric: took 269.990625ms to LocalClient.Create
	I0826 04:24:10.435314    4846 start.go:128] duration metric: took 2.298796875s to createHost
	I0826 04:24:10.435388    4846 start.go:83] releasing machines lock for "custom-flannel-336000", held for 2.2989505s
	W0826 04:24:10.435446    4846 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:10.450806    4846 out.go:177] * Deleting "custom-flannel-336000" in qemu2 ...
	W0826 04:24:10.484584    4846 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:10.484607    4846 start.go:729] Will try again in 5 seconds ...
	I0826 04:24:15.486699    4846 start.go:360] acquireMachinesLock for custom-flannel-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:15.487212    4846 start.go:364] duration metric: took 430.958µs to acquireMachinesLock for "custom-flannel-336000"
	I0826 04:24:15.487355    4846 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:15.487639    4846 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:15.507094    4846 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:15.558923    4846 start.go:159] libmachine.API.Create for "custom-flannel-336000" (driver="qemu2")
	I0826 04:24:15.558976    4846 client.go:168] LocalClient.Create starting
	I0826 04:24:15.559088    4846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:15.559143    4846 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:15.559159    4846 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:15.559221    4846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:15.559275    4846 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:15.559289    4846 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:15.559805    4846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:15.733290    4846 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:15.769920    4846 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:15.769925    4846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:15.770091    4846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0826 04:24:15.779108    4846 main.go:141] libmachine: STDOUT: 
	I0826 04:24:15.779131    4846 main.go:141] libmachine: STDERR: 
	I0826 04:24:15.779182    4846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2 +20000M
	I0826 04:24:15.787106    4846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:15.787120    4846 main.go:141] libmachine: STDERR: 
	I0826 04:24:15.787131    4846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0826 04:24:15.787137    4846 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:15.787151    4846 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:15.787175    4846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:ed:31:9e:f5:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0826 04:24:15.788782    4846 main.go:141] libmachine: STDOUT: 
	I0826 04:24:15.788803    4846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:15.788817    4846 client.go:171] duration metric: took 229.839084ms to LocalClient.Create
	I0826 04:24:17.790930    4846 start.go:128] duration metric: took 2.303315084s to createHost
	I0826 04:24:17.791015    4846 start.go:83] releasing machines lock for "custom-flannel-336000", held for 2.303822709s
	W0826 04:24:17.791367    4846 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:17.799952    4846 out.go:201] 
	W0826 04:24:17.808062    4846 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:24:17.808121    4846 out.go:270] * 
	* 
	W0826 04:24:17.810883    4846 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:24:17.819915    4846 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)
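
For context, the command each attempt runs (quoted in full in the stderr above) wraps QEMU in socket_vmnet_client: the wrapper dials the daemon's Unix socket and hands the connection to QEMU as file descriptor 3. A trimmed-down, commented rendition of that invocation follows; the full argument list appears verbatim in the logs, and paths are shortened here for readability.

    # socket_vmnet_client connects to the daemon socket and execs QEMU with
    # the connection inherited as fd 3:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 \
        -M virt,highmem=off -cpu host -accel hvf \
        -m 3072 -smp 2 \
        -device virtio-net-pci,netdev=net0 \
        -netdev socket,id=net0,fd=3 \
        -daemonize disk.qcow2
    # "-netdev socket,id=net0,fd=3" makes QEMU use the inherited descriptor as
    # the NIC backend; when the dial fails, the wrapper prints "Connection
    # refused" and QEMU never starts, which matches every failure above.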

TestNetworkPlugins/group/false/Start (10.01s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.003586292s)

-- stdout --
	* [false-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-336000" primary control-plane node in "false-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:24:20.221712    4965 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:24:20.221853    4965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:20.221856    4965 out.go:358] Setting ErrFile to fd 2...
	I0826 04:24:20.221858    4965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:20.221985    4965 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:24:20.223002    4965 out.go:352] Setting JSON to false
	I0826 04:24:20.239109    4965 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3223,"bootTime":1724668237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:24:20.239179    4965 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:24:20.245637    4965 out.go:177] * [false-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:24:20.253670    4965 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:24:20.253709    4965 notify.go:220] Checking for updates...
	I0826 04:24:20.262530    4965 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:24:20.265592    4965 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:24:20.268488    4965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:24:20.271590    4965 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:24:20.274645    4965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:24:20.276522    4965 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:20.276590    4965 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:20.276638    4965 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:24:20.279625    4965 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:24:20.286459    4965 start.go:297] selected driver: qemu2
	I0826 04:24:20.286465    4965 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:24:20.286471    4965 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:24:20.288700    4965 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:24:20.292599    4965 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:24:20.295713    4965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:24:20.295731    4965 cni.go:84] Creating CNI manager for "false"
	I0826 04:24:20.295758    4965 start.go:340] cluster config:
	{Name:false-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:24:20.299189    4965 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:24:20.303523    4965 out.go:177] * Starting "false-336000" primary control-plane node in "false-336000" cluster
	I0826 04:24:20.311647    4965 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:24:20.311672    4965 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:24:20.311683    4965 cache.go:56] Caching tarball of preloaded images
	I0826 04:24:20.311755    4965 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:24:20.311760    4965 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:24:20.311829    4965 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/false-336000/config.json ...
	I0826 04:24:20.311843    4965 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/false-336000/config.json: {Name:mkbb9f35f641b417563ccb84ff7af6552b9e612b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:24:20.312228    4965 start.go:360] acquireMachinesLock for false-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:20.312261    4965 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "false-336000"
	I0826 04:24:20.312272    4965 start.go:93] Provisioning new machine with config: &{Name:false-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:20.312300    4965 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:20.320387    4965 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:20.337495    4965 start.go:159] libmachine.API.Create for "false-336000" (driver="qemu2")
	I0826 04:24:20.337529    4965 client.go:168] LocalClient.Create starting
	I0826 04:24:20.337583    4965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:20.337613    4965 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:20.337623    4965 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:20.337670    4965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:20.337692    4965 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:20.337703    4965 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:20.338222    4965 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:20.501982    4965 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:20.780056    4965 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:20.780068    4965 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:20.780272    4965 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2
	I0826 04:24:20.789909    4965 main.go:141] libmachine: STDOUT: 
	I0826 04:24:20.789930    4965 main.go:141] libmachine: STDERR: 
	I0826 04:24:20.789979    4965 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2 +20000M
	I0826 04:24:20.798056    4965 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:20.798086    4965 main.go:141] libmachine: STDERR: 
	I0826 04:24:20.798098    4965 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2
	I0826 04:24:20.798103    4965 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:20.798117    4965 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:20.798146    4965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:f8:87:0f:c8:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2
	I0826 04:24:20.799807    4965 main.go:141] libmachine: STDOUT: 
	I0826 04:24:20.799825    4965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:20.799844    4965 client.go:171] duration metric: took 462.319584ms to LocalClient.Create
	I0826 04:24:22.801981    4965 start.go:128] duration metric: took 2.489715834s to createHost
	I0826 04:24:22.802043    4965 start.go:83] releasing machines lock for "false-336000", held for 2.489830375s
	W0826 04:24:22.802108    4965 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:22.813211    4965 out.go:177] * Deleting "false-336000" in qemu2 ...
	W0826 04:24:22.850279    4965 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:22.850298    4965 start.go:729] Will try again in 5 seconds ...
	I0826 04:24:27.852373    4965 start.go:360] acquireMachinesLock for false-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:27.852888    4965 start.go:364] duration metric: took 395.167µs to acquireMachinesLock for "false-336000"
	I0826 04:24:27.853062    4965 start.go:93] Provisioning new machine with config: &{Name:false-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:27.853429    4965 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:27.872009    4965 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:27.922275    4965 start.go:159] libmachine.API.Create for "false-336000" (driver="qemu2")
	I0826 04:24:27.922327    4965 client.go:168] LocalClient.Create starting
	I0826 04:24:27.922458    4965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:27.922525    4965 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:27.922543    4965 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:27.922612    4965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:27.922660    4965 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:27.922671    4965 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:27.923218    4965 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:28.094689    4965 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:28.126343    4965 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:28.126348    4965 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:28.126515    4965 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2
	I0826 04:24:28.135596    4965 main.go:141] libmachine: STDOUT: 
	I0826 04:24:28.135614    4965 main.go:141] libmachine: STDERR: 
	I0826 04:24:28.135664    4965 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2 +20000M
	I0826 04:24:28.143505    4965 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:28.143520    4965 main.go:141] libmachine: STDERR: 
	I0826 04:24:28.143532    4965 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2
	I0826 04:24:28.143537    4965 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:28.143545    4965 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:28.143590    4965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:bd:88:9f:2f:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/false-336000/disk.qcow2
	I0826 04:24:28.145238    4965 main.go:141] libmachine: STDOUT: 
	I0826 04:24:28.145253    4965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:28.145266    4965 client.go:171] duration metric: took 222.937583ms to LocalClient.Create
	I0826 04:24:30.147447    4965 start.go:128] duration metric: took 2.294012084s to createHost
	I0826 04:24:30.147634    4965 start.go:83] releasing machines lock for "false-336000", held for 2.29474075s
	W0826 04:24:30.147937    4965 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:30.163741    4965 out.go:201] 
	W0826 04:24:30.168711    4965 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:24:30.168739    4965 out.go:270] * 
	* 
	W0826 04:24:30.171079    4965 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:24:30.181745    4965 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.01s)
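Diagnosis note: every failure in this group exits at the same step. The qemu2 driver wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so host creation aborts before Kubernetes or any CNI is involved. A minimal shell sketch for checking the daemon on the CI host follows; the foreground invocation and the gateway address are assumptions based on a default socket_vmnet install, not values taken from this run:

    # Is the unix socket present, and is a daemon process holding it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If the daemon is not running, start it in the foreground for a quick test
    # (the gateway address is the socket_vmnet default; adjust to the local setup):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

    # Verify a client can connect: socket_vmnet_client execs the given command
    # with the socket on fd 3, so a trivial command is enough as a probe.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo connected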

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.849922333s)

-- stdout --
	* [kindnet-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-336000" primary control-plane node in "kindnet-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:24:32.412341    5074 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:24:32.412474    5074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:32.412483    5074 out.go:358] Setting ErrFile to fd 2...
	I0826 04:24:32.412486    5074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:32.412634    5074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:24:32.413656    5074 out.go:352] Setting JSON to false
	I0826 04:24:32.429881    5074 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3235,"bootTime":1724668237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:24:32.429960    5074 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:24:32.436171    5074 out.go:177] * [kindnet-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:24:32.444984    5074 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:24:32.445069    5074 notify.go:220] Checking for updates...
	I0826 04:24:32.452965    5074 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:24:32.455975    5074 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:24:32.458949    5074 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:24:32.461997    5074 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:24:32.464963    5074 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:24:32.468332    5074 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:32.468409    5074 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:32.468457    5074 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:24:32.472965    5074 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:24:32.479944    5074 start.go:297] selected driver: qemu2
	I0826 04:24:32.479952    5074 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:24:32.479959    5074 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:24:32.482373    5074 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:24:32.486940    5074 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:24:32.489954    5074 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:24:32.489989    5074 cni.go:84] Creating CNI manager for "kindnet"
	I0826 04:24:32.489993    5074 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0826 04:24:32.490023    5074 start.go:340] cluster config:
	{Name:kindnet-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:24:32.493859    5074 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:24:32.501850    5074 out.go:177] * Starting "kindnet-336000" primary control-plane node in "kindnet-336000" cluster
	I0826 04:24:32.506006    5074 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:24:32.506024    5074 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:24:32.506036    5074 cache.go:56] Caching tarball of preloaded images
	I0826 04:24:32.506111    5074 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:24:32.506117    5074 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:24:32.506180    5074 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/kindnet-336000/config.json ...
	I0826 04:24:32.506192    5074 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/kindnet-336000/config.json: {Name:mk5cc015459bf2dc6a15e2dcc9149ef70b97ae13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:24:32.506436    5074 start.go:360] acquireMachinesLock for kindnet-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:32.506472    5074 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "kindnet-336000"
	I0826 04:24:32.506484    5074 start.go:93] Provisioning new machine with config: &{Name:kindnet-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:32.506516    5074 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:32.514934    5074 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:32.533785    5074 start.go:159] libmachine.API.Create for "kindnet-336000" (driver="qemu2")
	I0826 04:24:32.533808    5074 client.go:168] LocalClient.Create starting
	I0826 04:24:32.533866    5074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:32.533897    5074 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:32.533910    5074 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:32.533947    5074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:32.533971    5074 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:32.533982    5074 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:32.534457    5074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:32.696928    5074 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:32.775752    5074 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:32.775757    5074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:32.775924    5074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2
	I0826 04:24:32.785001    5074 main.go:141] libmachine: STDOUT: 
	I0826 04:24:32.785020    5074 main.go:141] libmachine: STDERR: 
	I0826 04:24:32.785069    5074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2 +20000M
	I0826 04:24:32.793009    5074 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:32.793029    5074 main.go:141] libmachine: STDERR: 
	I0826 04:24:32.793050    5074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2
	I0826 04:24:32.793055    5074 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:32.793065    5074 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:32.793089    5074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d5:71:5a:80:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2
	I0826 04:24:32.794685    5074 main.go:141] libmachine: STDOUT: 
	I0826 04:24:32.794702    5074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:32.794719    5074 client.go:171] duration metric: took 260.912292ms to LocalClient.Create
	I0826 04:24:34.796842    5074 start.go:128] duration metric: took 2.290356667s to createHost
	I0826 04:24:34.796899    5074 start.go:83] releasing machines lock for "kindnet-336000", held for 2.290470541s
	W0826 04:24:34.797031    5074 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:34.813369    5074 out.go:177] * Deleting "kindnet-336000" in qemu2 ...
	W0826 04:24:34.844140    5074 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:34.844165    5074 start.go:729] Will try again in 5 seconds ...
	I0826 04:24:39.846191    5074 start.go:360] acquireMachinesLock for kindnet-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:39.846664    5074 start.go:364] duration metric: took 378.083µs to acquireMachinesLock for "kindnet-336000"
	I0826 04:24:39.846812    5074 start.go:93] Provisioning new machine with config: &{Name:kindnet-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:39.847094    5074 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:39.865689    5074 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:39.917785    5074 start.go:159] libmachine.API.Create for "kindnet-336000" (driver="qemu2")
	I0826 04:24:39.917832    5074 client.go:168] LocalClient.Create starting
	I0826 04:24:39.917957    5074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:39.918024    5074 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:39.918047    5074 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:39.918105    5074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:39.918153    5074 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:39.918163    5074 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:39.918704    5074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:40.090807    5074 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:40.170832    5074 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:40.170838    5074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:40.171018    5074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2
	I0826 04:24:40.180324    5074 main.go:141] libmachine: STDOUT: 
	I0826 04:24:40.180342    5074 main.go:141] libmachine: STDERR: 
	I0826 04:24:40.180406    5074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2 +20000M
	I0826 04:24:40.188506    5074 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:40.188526    5074 main.go:141] libmachine: STDERR: 
	I0826 04:24:40.188537    5074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2
	I0826 04:24:40.188542    5074 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:40.188551    5074 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:40.188577    5074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ff:6a:2b:b0:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000/disk.qcow2
	I0826 04:24:40.190222    5074 main.go:141] libmachine: STDOUT: 
	I0826 04:24:40.190238    5074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:40.190258    5074 client.go:171] duration metric: took 272.425ms to LocalClient.Create
	I0826 04:24:42.192436    5074 start.go:128] duration metric: took 2.345307083s to createHost
	I0826 04:24:42.192499    5074 start.go:83] releasing machines lock for "kindnet-336000", held for 2.34584975s
	W0826 04:24:42.192785    5074 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:42.206668    5074 out.go:201] 
	W0826 04:24:42.211661    5074 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:24:42.211707    5074 out.go:270] * 
	* 
	W0826 04:24:42.214167    5074 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:24:42.220630    5074 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
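Note that disk preparation succeeds in every attempt above (both qemu-img invocations return with empty STDERR); only the socket_vmnet_client-wrapped QEMU launch fails. A quick way to separate the VM from the vmnet dependency is to boot the same image with QEMU's user-mode networking. This is a sketch assuming the paths from the log are still present (minikube deletes the profile on failure, so the files may need to be recreated first):

    # Reuse the disk image minikube built, but swap the socket netdev for
    # user-mode networking to confirm the VM itself is bootable under hvf.
    # Paths are copied from the log above.
    M=/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kindnet-336000
    qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
      -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
      -display none -boot d -cdrom "$M/boot2docker.iso" \
      -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
      "$M/disk.qcow2"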

TestNetworkPlugins/group/flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.827870625s)

-- stdout --
	* [flannel-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-336000" primary control-plane node in "flannel-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:24:44.539465    5190 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:24:44.539594    5190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:44.539597    5190 out.go:358] Setting ErrFile to fd 2...
	I0826 04:24:44.539600    5190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:44.539739    5190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:24:44.540750    5190 out.go:352] Setting JSON to false
	I0826 04:24:44.557035    5190 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3247,"bootTime":1724668237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:24:44.557104    5190 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:24:44.563775    5190 out.go:177] * [flannel-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:24:44.571602    5190 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:24:44.571657    5190 notify.go:220] Checking for updates...
	I0826 04:24:44.579531    5190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:24:44.582578    5190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:24:44.586602    5190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:24:44.589574    5190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:24:44.592522    5190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:24:44.595976    5190 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:44.596049    5190 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:44.596100    5190 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:24:44.600479    5190 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:24:44.607578    5190 start.go:297] selected driver: qemu2
	I0826 04:24:44.607586    5190 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:24:44.607594    5190 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:24:44.609980    5190 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:24:44.614466    5190 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:24:44.617639    5190 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:24:44.617678    5190 cni.go:84] Creating CNI manager for "flannel"
	I0826 04:24:44.617682    5190 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0826 04:24:44.617713    5190 start.go:340] cluster config:
	{Name:flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:24:44.621525    5190 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:24:44.628521    5190 out.go:177] * Starting "flannel-336000" primary control-plane node in "flannel-336000" cluster
	I0826 04:24:44.632568    5190 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:24:44.632582    5190 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:24:44.632590    5190 cache.go:56] Caching tarball of preloaded images
	I0826 04:24:44.632647    5190 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:24:44.632662    5190 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:24:44.632728    5190 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/flannel-336000/config.json ...
	I0826 04:24:44.632742    5190 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/flannel-336000/config.json: {Name:mk076b356ebffa15e54dd933281e9427fd253f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:24:44.633170    5190 start.go:360] acquireMachinesLock for flannel-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:44.633212    5190 start.go:364] duration metric: took 33.584µs to acquireMachinesLock for "flannel-336000"
	I0826 04:24:44.633225    5190 start.go:93] Provisioning new machine with config: &{Name:flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:44.633261    5190 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:44.637588    5190 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:44.656341    5190 start.go:159] libmachine.API.Create for "flannel-336000" (driver="qemu2")
	I0826 04:24:44.656372    5190 client.go:168] LocalClient.Create starting
	I0826 04:24:44.656429    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:44.656462    5190 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:44.656471    5190 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:44.656508    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:44.656538    5190 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:44.656546    5190 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:44.657047    5190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:44.823183    5190 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:44.897034    5190 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:44.897040    5190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:44.897223    5190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2
	I0826 04:24:44.906496    5190 main.go:141] libmachine: STDOUT: 
	I0826 04:24:44.906513    5190 main.go:141] libmachine: STDERR: 
	I0826 04:24:44.906568    5190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2 +20000M
	I0826 04:24:44.914372    5190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:44.914402    5190 main.go:141] libmachine: STDERR: 
	I0826 04:24:44.914421    5190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2
	I0826 04:24:44.914425    5190 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:44.914437    5190 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:44.914461    5190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:4c:32:95:29:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2
	I0826 04:24:44.916049    5190 main.go:141] libmachine: STDOUT: 
	I0826 04:24:44.916064    5190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:44.916083    5190 client.go:171] duration metric: took 259.712167ms to LocalClient.Create
	I0826 04:24:46.918201    5190 start.go:128] duration metric: took 2.28497425s to createHost
	I0826 04:24:46.918277    5190 start.go:83] releasing machines lock for "flannel-336000", held for 2.285107583s
	W0826 04:24:46.918412    5190 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:46.925802    5190 out.go:177] * Deleting "flannel-336000" in qemu2 ...
	W0826 04:24:46.955949    5190 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:46.955977    5190 start.go:729] Will try again in 5 seconds ...
	I0826 04:24:51.958039    5190 start.go:360] acquireMachinesLock for flannel-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:51.958541    5190 start.go:364] duration metric: took 388.25µs to acquireMachinesLock for "flannel-336000"
	I0826 04:24:51.958713    5190 start.go:93] Provisioning new machine with config: &{Name:flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:51.959062    5190 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:51.968743    5190 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:52.019148    5190 start.go:159] libmachine.API.Create for "flannel-336000" (driver="qemu2")
	I0826 04:24:52.019191    5190 client.go:168] LocalClient.Create starting
	I0826 04:24:52.019290    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:52.019348    5190 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:52.019366    5190 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:52.019426    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:52.019469    5190 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:52.019482    5190 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:52.020099    5190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:52.191968    5190 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:52.270055    5190 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:52.270061    5190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:52.270236    5190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2
	I0826 04:24:52.279247    5190 main.go:141] libmachine: STDOUT: 
	I0826 04:24:52.279264    5190 main.go:141] libmachine: STDERR: 
	I0826 04:24:52.279304    5190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2 +20000M
	I0826 04:24:52.287137    5190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:52.287151    5190 main.go:141] libmachine: STDERR: 
	I0826 04:24:52.287174    5190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2
	I0826 04:24:52.287179    5190 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:52.287187    5190 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:52.287211    5190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:ac:84:a2:9d:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/flannel-336000/disk.qcow2
	I0826 04:24:52.288829    5190 main.go:141] libmachine: STDOUT: 
	I0826 04:24:52.288844    5190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:52.288854    5190 client.go:171] duration metric: took 269.66425ms to LocalClient.Create
	I0826 04:24:54.290990    5190 start.go:128] duration metric: took 2.331927833s to createHost
	I0826 04:24:54.291058    5190 start.go:83] releasing machines lock for "flannel-336000", held for 2.33252125s
	W0826 04:24:54.291554    5190 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:54.306200    5190 out.go:201] 
	W0826 04:24:54.310326    5190 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:24:54.310352    5190 out.go:270] * 
	* 
	W0826 04:24:54.313271    5190 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:24:54.323148    5190 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.83s)
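The choice of CNI is irrelevant to these failures: kindnet, flannel, and the bridge CNI selected by --enable-default-cni below all die identically during host creation, before any CNI manifest is applied. If the daemon actually lives somewhere other than /var/run/socket_vmnet (for example a Homebrew-managed socket_vmnet), minikube can be pointed at it. The flag names in this sketch are inferred from the SocketVMnetClientPath/SocketVMnetPath fields in the cluster config above and should be confirmed against `out/minikube-darwin-arm64 start --help`:

    # Hypothetical re-run against a Homebrew-managed socket_vmnet install;
    # verify the flag names and paths on the host before relying on this.
    out/minikube-darwin-arm64 start -p flannel-336000 --driver=qemu2 --cni=flannel \
      --socket-vmnet-client-path=/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client \
      --socket-vmnet-path=/opt/homebrew/var/run/socket_vmnet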

TestNetworkPlugins/group/enable-default-cni/Start (10.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.194472s)

-- stdout --
	* [enable-default-cni-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-336000" primary control-plane node in "enable-default-cni-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:24:56.696629    5307 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:24:56.696758    5307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:56.696761    5307 out.go:358] Setting ErrFile to fd 2...
	I0826 04:24:56.696763    5307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:24:56.696875    5307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:24:56.697925    5307 out.go:352] Setting JSON to false
	I0826 04:24:56.713694    5307 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3259,"bootTime":1724668237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:24:56.713812    5307 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:24:56.719639    5307 out.go:177] * [enable-default-cni-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:24:56.728368    5307 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:24:56.728419    5307 notify.go:220] Checking for updates...
	I0826 04:24:56.737248    5307 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:24:56.740363    5307 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:24:56.743351    5307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:24:56.746285    5307 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:24:56.749348    5307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:24:56.752660    5307 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:56.752737    5307 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:24:56.752790    5307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:24:56.756331    5307 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:24:56.763335    5307 start.go:297] selected driver: qemu2
	I0826 04:24:56.763344    5307 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:24:56.763350    5307 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:24:56.765528    5307 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:24:56.768381    5307 out.go:177] * Automatically selected the socket_vmnet network
	E0826 04:24:56.771387    5307 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0826 04:24:56.771401    5307 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:24:56.771417    5307 cni.go:84] Creating CNI manager for "bridge"
	I0826 04:24:56.771421    5307 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:24:56.771457    5307 start.go:340] cluster config:
	{Name:enable-default-cni-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:24:56.774864    5307 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:24:56.783349    5307 out.go:177] * Starting "enable-default-cni-336000" primary control-plane node in "enable-default-cni-336000" cluster
	I0826 04:24:56.787372    5307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:24:56.787387    5307 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:24:56.787396    5307 cache.go:56] Caching tarball of preloaded images
	I0826 04:24:56.787451    5307 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:24:56.787456    5307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:24:56.787520    5307 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/enable-default-cni-336000/config.json ...
	I0826 04:24:56.787531    5307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/enable-default-cni-336000/config.json: {Name:mkedb0d1f3e6fbab3d00369f05a56f051142b9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:24:56.787748    5307 start.go:360] acquireMachinesLock for enable-default-cni-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:24:56.787784    5307 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "enable-default-cni-336000"
	I0826 04:24:56.787797    5307 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:24:56.787829    5307 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:24:56.796322    5307 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:24:56.813726    5307 start.go:159] libmachine.API.Create for "enable-default-cni-336000" (driver="qemu2")
	I0826 04:24:56.813756    5307 client.go:168] LocalClient.Create starting
	I0826 04:24:56.813817    5307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:24:56.813847    5307 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:56.813859    5307 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:56.813891    5307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:24:56.813921    5307 main.go:141] libmachine: Decoding PEM data...
	I0826 04:24:56.813928    5307 main.go:141] libmachine: Parsing certificate...
	I0826 04:24:56.814287    5307 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:24:56.977533    5307 main.go:141] libmachine: Creating SSH key...
	I0826 04:24:57.199824    5307 main.go:141] libmachine: Creating Disk image...
	I0826 04:24:57.199836    5307 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:24:57.200033    5307 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0826 04:24:57.209620    5307 main.go:141] libmachine: STDOUT: 
	I0826 04:24:57.209643    5307 main.go:141] libmachine: STDERR: 
	I0826 04:24:57.209693    5307 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2 +20000M
	I0826 04:24:57.217593    5307 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:24:57.217607    5307 main.go:141] libmachine: STDERR: 
	I0826 04:24:57.217632    5307 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0826 04:24:57.217638    5307 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:24:57.217657    5307 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:24:57.217687    5307 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:c2:09:4a:f7:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0826 04:24:57.219267    5307 main.go:141] libmachine: STDOUT: 
	I0826 04:24:57.219284    5307 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:24:57.219301    5307 client.go:171] duration metric: took 405.549917ms to LocalClient.Create
	I0826 04:24:59.221434    5307 start.go:128] duration metric: took 2.433640333s to createHost
	I0826 04:24:59.221485    5307 start.go:83] releasing machines lock for "enable-default-cni-336000", held for 2.433747s
	W0826 04:24:59.221560    5307 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:59.233898    5307 out.go:177] * Deleting "enable-default-cni-336000" in qemu2 ...
	W0826 04:24:59.266174    5307 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:24:59.266202    5307 start.go:729] Will try again in 5 seconds ...
	I0826 04:25:04.268251    5307 start.go:360] acquireMachinesLock for enable-default-cni-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:04.268700    5307 start.go:364] duration metric: took 330.167µs to acquireMachinesLock for "enable-default-cni-336000"
	I0826 04:25:04.268823    5307 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:25:04.269076    5307 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:25:04.287795    5307 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:25:04.339727    5307 start.go:159] libmachine.API.Create for "enable-default-cni-336000" (driver="qemu2")
	I0826 04:25:04.339791    5307 client.go:168] LocalClient.Create starting
	I0826 04:25:04.339906    5307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:25:04.339963    5307 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:04.339977    5307 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:04.340042    5307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:25:04.340088    5307 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:04.340100    5307 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:04.340710    5307 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:25:04.514606    5307 main.go:141] libmachine: Creating SSH key...
	I0826 04:25:04.794935    5307 main.go:141] libmachine: Creating Disk image...
	I0826 04:25:04.794949    5307 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:25:04.795155    5307 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0826 04:25:04.804834    5307 main.go:141] libmachine: STDOUT: 
	I0826 04:25:04.804865    5307 main.go:141] libmachine: STDERR: 
	I0826 04:25:04.804933    5307 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2 +20000M
	I0826 04:25:04.812941    5307 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:25:04.812954    5307 main.go:141] libmachine: STDERR: 
	I0826 04:25:04.812966    5307 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0826 04:25:04.812969    5307 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:25:04.812984    5307 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:04.813020    5307 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:ba:cb:cf:4b:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0826 04:25:04.814640    5307 main.go:141] libmachine: STDOUT: 
	I0826 04:25:04.814658    5307 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:04.814673    5307 client.go:171] duration metric: took 474.888333ms to LocalClient.Create
	I0826 04:25:06.816814    5307 start.go:128] duration metric: took 2.547767417s to createHost
	I0826 04:25:06.816859    5307 start.go:83] releasing machines lock for "enable-default-cni-336000", held for 2.548194s
	W0826 04:25:06.817187    5307 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:06.832957    5307 out.go:201] 
	W0826 04:25:06.836912    5307 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:25:06.836940    5307 out.go:270] * 
	* 
	W0826 04:25:06.839707    5307 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:25:06.848880    5307 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.20s)
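Note the E-line above: minikube maps the deprecated --enable-default-cni flag onto --cni=bridge, so this test exercises the same bridge-CNI code path as the bridge failure that follows. The non-deprecated equivalent of the failing command, with only the CNI flag swapped, would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2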

TestNetworkPlugins/group/bridge/Start (10.01s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.005111625s)

-- stdout --
	* [bridge-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-336000" primary control-plane node in "bridge-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:25:09.062112    5416 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:25:09.062242    5416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:09.062244    5416 out.go:358] Setting ErrFile to fd 2...
	I0826 04:25:09.062248    5416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:09.062376    5416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:25:09.063455    5416 out.go:352] Setting JSON to false
	I0826 04:25:09.079431    5416 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3272,"bootTime":1724668237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:25:09.079498    5416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:25:09.084676    5416 out.go:177] * [bridge-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:25:09.091477    5416 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:25:09.091524    5416 notify.go:220] Checking for updates...
	I0826 04:25:09.100346    5416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:25:09.103394    5416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:25:09.106447    5416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:25:09.109424    5416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:25:09.112447    5416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:25:09.115836    5416 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:25:09.115907    5416 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:25:09.115951    5416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:25:09.120390    5416 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:25:09.127408    5416 start.go:297] selected driver: qemu2
	I0826 04:25:09.127416    5416 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:25:09.127426    5416 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:25:09.129678    5416 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:25:09.133423    5416 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:25:09.136494    5416 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:25:09.136514    5416 cni.go:84] Creating CNI manager for "bridge"
	I0826 04:25:09.136518    5416 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:25:09.136558    5416 start.go:340] cluster config:
	{Name:bridge-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:25:09.140178    5416 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:09.147367    5416 out.go:177] * Starting "bridge-336000" primary control-plane node in "bridge-336000" cluster
	I0826 04:25:09.151439    5416 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:25:09.151459    5416 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:25:09.151472    5416 cache.go:56] Caching tarball of preloaded images
	I0826 04:25:09.151551    5416 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:25:09.151558    5416 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:25:09.151636    5416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/bridge-336000/config.json ...
	I0826 04:25:09.151647    5416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/bridge-336000/config.json: {Name:mk8211454a29287ff18927582f6bd85ac934d313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:25:09.151970    5416 start.go:360] acquireMachinesLock for bridge-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:09.152006    5416 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "bridge-336000"
	I0826 04:25:09.152019    5416 start.go:93] Provisioning new machine with config: &{Name:bridge-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:25:09.152057    5416 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:25:09.160471    5416 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:25:09.178428    5416 start.go:159] libmachine.API.Create for "bridge-336000" (driver="qemu2")
	I0826 04:25:09.178458    5416 client.go:168] LocalClient.Create starting
	I0826 04:25:09.178519    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:25:09.178546    5416 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:09.178555    5416 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:09.178592    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:25:09.178615    5416 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:09.178621    5416 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:09.178997    5416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:25:09.343875    5416 main.go:141] libmachine: Creating SSH key...
	I0826 04:25:09.580318    5416 main.go:141] libmachine: Creating Disk image...
	I0826 04:25:09.580326    5416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:25:09.580577    5416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2
	I0826 04:25:09.590306    5416 main.go:141] libmachine: STDOUT: 
	I0826 04:25:09.590328    5416 main.go:141] libmachine: STDERR: 
	I0826 04:25:09.590381    5416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2 +20000M
	I0826 04:25:09.598262    5416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:25:09.598282    5416 main.go:141] libmachine: STDERR: 
	I0826 04:25:09.598300    5416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2
	I0826 04:25:09.598305    5416 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:25:09.598316    5416 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:09.598350    5416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:6c:4c:7c:74:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2
	I0826 04:25:09.599933    5416 main.go:141] libmachine: STDOUT: 
	I0826 04:25:09.599950    5416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:09.599967    5416 client.go:171] duration metric: took 421.514042ms to LocalClient.Create
	I0826 04:25:11.602104    5416 start.go:128] duration metric: took 2.450082542s to createHost
	I0826 04:25:11.602153    5416 start.go:83] releasing machines lock for "bridge-336000", held for 2.450193875s
	W0826 04:25:11.602234    5416 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:11.617036    5416 out.go:177] * Deleting "bridge-336000" in qemu2 ...
	W0826 04:25:11.648478    5416 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:11.648501    5416 start.go:729] Will try again in 5 seconds ...
	I0826 04:25:16.650551    5416 start.go:360] acquireMachinesLock for bridge-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:16.650995    5416 start.go:364] duration metric: took 345.167µs to acquireMachinesLock for "bridge-336000"
	I0826 04:25:16.651120    5416 start.go:93] Provisioning new machine with config: &{Name:bridge-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:25:16.651464    5416 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:25:16.670181    5416 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:25:16.721223    5416 start.go:159] libmachine.API.Create for "bridge-336000" (driver="qemu2")
	I0826 04:25:16.721270    5416 client.go:168] LocalClient.Create starting
	I0826 04:25:16.721384    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:25:16.721448    5416 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:16.721465    5416 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:16.721523    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:25:16.721570    5416 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:16.721584    5416 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:16.722075    5416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:25:16.894051    5416 main.go:141] libmachine: Creating SSH key...
	I0826 04:25:16.973632    5416 main.go:141] libmachine: Creating Disk image...
	I0826 04:25:16.973641    5416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:25:16.973820    5416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2
	I0826 04:25:16.982811    5416 main.go:141] libmachine: STDOUT: 
	I0826 04:25:16.982830    5416 main.go:141] libmachine: STDERR: 
	I0826 04:25:16.982880    5416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2 +20000M
	I0826 04:25:16.990622    5416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:25:16.990637    5416 main.go:141] libmachine: STDERR: 
	I0826 04:25:16.990653    5416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2
	I0826 04:25:16.990658    5416 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:25:16.990668    5416 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:16.990714    5416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:36:22:77:2d:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/bridge-336000/disk.qcow2
	I0826 04:25:16.992318    5416 main.go:141] libmachine: STDOUT: 
	I0826 04:25:16.992335    5416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:16.992346    5416 client.go:171] duration metric: took 271.076667ms to LocalClient.Create
	I0826 04:25:18.994490    5416 start.go:128] duration metric: took 2.343050333s to createHost
	I0826 04:25:18.994566    5416 start.go:83] releasing machines lock for "bridge-336000", held for 2.343600333s
	W0826 04:25:18.994880    5416 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:19.008594    5416 out.go:201] 
	W0826 04:25:19.012736    5416 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:25:19.012767    5416 out.go:270] * 
	* 
	W0826 04:25:19.015633    5416 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:25:19.023573    5416 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.01s)
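In every attempt the disk-image preparation succeeds (both qemu-img calls log "Image resized." with empty STDERR) and only the socket_vmnet_client handoff fails, which isolates the fault to host networking rather than QEMU or the image cache. The two image steps can be reproduced by hand; a sketch, with $MINIKUBE_HOME standing in for the literal /Users/jenkins/minikube-integration/19501-1045/.minikube path in the log:

	qemu-img convert -f raw -O qcow2 "$MINIKUBE_HOME/machines/bridge-336000/disk.qcow2.raw" "$MINIKUBE_HOME/machines/bridge-336000/disk.qcow2"
	qemu-img resize "$MINIKUBE_HOME/machines/bridge-336000/disk.qcow2" +20000M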

TestNetworkPlugins/group/kubenet/Start (9.89s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.8890805s)

-- stdout --
	* [kubenet-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-336000" primary control-plane node in "kubenet-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:25:21.202269    5527 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:25:21.202408    5527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:21.202412    5527 out.go:358] Setting ErrFile to fd 2...
	I0826 04:25:21.202414    5527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:21.202554    5527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:25:21.203576    5527 out.go:352] Setting JSON to false
	I0826 04:25:21.219593    5527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3284,"bootTime":1724668237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:25:21.219667    5527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:25:21.225859    5527 out.go:177] * [kubenet-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:25:21.233766    5527 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:25:21.233811    5527 notify.go:220] Checking for updates...
	I0826 04:25:21.241761    5527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:25:21.243430    5527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:25:21.246750    5527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:25:21.249802    5527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:25:21.252800    5527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:25:21.256064    5527 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:25:21.256133    5527 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:25:21.256174    5527 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:25:21.260743    5527 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:25:21.267764    5527 start.go:297] selected driver: qemu2
	I0826 04:25:21.267769    5527 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:25:21.267775    5527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:25:21.269962    5527 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:25:21.274698    5527 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:25:21.277801    5527 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:25:21.277821    5527 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0826 04:25:21.277853    5527 start.go:340] cluster config:
	{Name:kubenet-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:25:21.281553    5527 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:21.289787    5527 out.go:177] * Starting "kubenet-336000" primary control-plane node in "kubenet-336000" cluster
	I0826 04:25:21.293708    5527 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:25:21.293727    5527 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:25:21.293734    5527 cache.go:56] Caching tarball of preloaded images
	I0826 04:25:21.293792    5527 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:25:21.293798    5527 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:25:21.293856    5527 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/kubenet-336000/config.json ...
	I0826 04:25:21.293870    5527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/kubenet-336000/config.json: {Name:mkd5fd2612c4c42a8b956c951854aa7b9ff9ed9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:25:21.294102    5527 start.go:360] acquireMachinesLock for kubenet-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:21.294142    5527 start.go:364] duration metric: took 30.041µs to acquireMachinesLock for "kubenet-336000"
	I0826 04:25:21.294156    5527 start.go:93] Provisioning new machine with config: &{Name:kubenet-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:25:21.294190    5527 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:25:21.301687    5527 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:25:21.319882    5527 start.go:159] libmachine.API.Create for "kubenet-336000" (driver="qemu2")
	I0826 04:25:21.319906    5527 client.go:168] LocalClient.Create starting
	I0826 04:25:21.319978    5527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:25:21.320010    5527 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:21.320023    5527 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:21.320060    5527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:25:21.320085    5527 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:21.320094    5527 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:21.320542    5527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:25:21.498223    5527 main.go:141] libmachine: Creating SSH key...
	I0826 04:25:21.583802    5527 main.go:141] libmachine: Creating Disk image...
	I0826 04:25:21.583807    5527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:25:21.583984    5527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2
	I0826 04:25:21.593336    5527 main.go:141] libmachine: STDOUT: 
	I0826 04:25:21.593361    5527 main.go:141] libmachine: STDERR: 
	I0826 04:25:21.593417    5527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2 +20000M
	I0826 04:25:21.601192    5527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:25:21.601210    5527 main.go:141] libmachine: STDERR: 
	I0826 04:25:21.601226    5527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2
	I0826 04:25:21.601232    5527 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:25:21.601241    5527 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:21.601265    5527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:8a:39:ba:01:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2
	I0826 04:25:21.602801    5527 main.go:141] libmachine: STDOUT: 
	I0826 04:25:21.602817    5527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:21.602840    5527 client.go:171] duration metric: took 282.935542ms to LocalClient.Create
	I0826 04:25:23.604963    5527 start.go:128] duration metric: took 2.310805083s to createHost
	I0826 04:25:23.605013    5527 start.go:83] releasing machines lock for "kubenet-336000", held for 2.310915041s
	W0826 04:25:23.605093    5527 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:23.620276    5527 out.go:177] * Deleting "kubenet-336000" in qemu2 ...
	W0826 04:25:23.653434    5527 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:23.653456    5527 start.go:729] Will try again in 5 seconds ...
	I0826 04:25:28.655491    5527 start.go:360] acquireMachinesLock for kubenet-336000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:28.655968    5527 start.go:364] duration metric: took 389.291µs to acquireMachinesLock for "kubenet-336000"
	I0826 04:25:28.656091    5527 start.go:93] Provisioning new machine with config: &{Name:kubenet-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:25:28.656376    5527 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:25:28.675052    5527 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 04:25:28.724285    5527 start.go:159] libmachine.API.Create for "kubenet-336000" (driver="qemu2")
	I0826 04:25:28.724340    5527 client.go:168] LocalClient.Create starting
	I0826 04:25:28.724448    5527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:25:28.724510    5527 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:28.724525    5527 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:28.724590    5527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:25:28.724634    5527 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:28.724649    5527 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:28.725220    5527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:25:28.898190    5527 main.go:141] libmachine: Creating SSH key...
	I0826 04:25:28.995893    5527 main.go:141] libmachine: Creating Disk image...
	I0826 04:25:28.995898    5527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:25:28.996079    5527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2
	I0826 04:25:29.005516    5527 main.go:141] libmachine: STDOUT: 
	I0826 04:25:29.005530    5527 main.go:141] libmachine: STDERR: 
	I0826 04:25:29.005581    5527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2 +20000M
	I0826 04:25:29.013423    5527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:25:29.013438    5527 main.go:141] libmachine: STDERR: 
	I0826 04:25:29.013451    5527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2
	I0826 04:25:29.013455    5527 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:25:29.013465    5527 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:29.013502    5527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:c9:7a:07:a1:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/kubenet-336000/disk.qcow2
	I0826 04:25:29.015148    5527 main.go:141] libmachine: STDOUT: 
	I0826 04:25:29.015164    5527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:29.015188    5527 client.go:171] duration metric: took 290.850125ms to LocalClient.Create
	I0826 04:25:31.017324    5527 start.go:128] duration metric: took 2.360973791s to createHost
	I0826 04:25:31.017366    5527 start.go:83] releasing machines lock for "kubenet-336000", held for 2.361431917s
	W0826 04:25:31.017757    5527 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:31.031303    5527 out.go:201] 
	W0826 04:25:31.035524    5527 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:25:31.035550    5527 out.go:270] * 
	* 
	W0826 04:25:31.038583    5527 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:25:31.051397    5527 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.89s)
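Every failure in this run reduces to the same error: the qemu2 driver's socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). The following standalone probe is a minimal sketch, not anything from the minikube source tree; the socket path is taken from SocketVMnetPath in the cluster config logged above, and the file name socketprobe.go is hypothetical. It reproduces the failing check without creating a VM:

	// socketprobe.go - sketch: dials the unix socket that socket_vmnet_client
	// connects to, so the "Connection refused" above can be reproduced in
	// isolation.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the logged config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1) // same condition that surfaces as "exit status 1" above
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

When no daemon owns the socket, the dial fails with "connection refused", matching the STDERR lines captured above; once the daemon is running, the probe succeeds and the qemu2 start path should get past this step.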

TestStartStop/group/old-k8s-version/serial/FirstStart (10.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-173000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-173000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.05801325s)

-- stdout --
	* [old-k8s-version-173000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-173000" primary control-plane node in "old-k8s-version-173000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-173000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:25:33.253789    5636 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:25:33.253931    5636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:33.253934    5636 out.go:358] Setting ErrFile to fd 2...
	I0826 04:25:33.253937    5636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:33.254076    5636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:25:33.255096    5636 out.go:352] Setting JSON to false
	I0826 04:25:33.271012    5636 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3296,"bootTime":1724668237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:25:33.271087    5636 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:25:33.277434    5636 out.go:177] * [old-k8s-version-173000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:25:33.285298    5636 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:25:33.285338    5636 notify.go:220] Checking for updates...
	I0826 04:25:33.291276    5636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:25:33.294177    5636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:25:33.298248    5636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:25:33.301241    5636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:25:33.304180    5636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:25:33.307591    5636 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:25:33.307659    5636 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:25:33.307718    5636 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:25:33.312221    5636 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:25:33.319242    5636 start.go:297] selected driver: qemu2
	I0826 04:25:33.319254    5636 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:25:33.319260    5636 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:25:33.321572    5636 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:25:33.326282    5636 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:25:33.329296    5636 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:25:33.329330    5636 cni.go:84] Creating CNI manager for ""
	I0826 04:25:33.329337    5636 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0826 04:25:33.329364    5636 start.go:340] cluster config:
	{Name:old-k8s-version-173000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:25:33.333087    5636 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:33.341164    5636 out.go:177] * Starting "old-k8s-version-173000" primary control-plane node in "old-k8s-version-173000" cluster
	I0826 04:25:33.345217    5636 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0826 04:25:33.345234    5636 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0826 04:25:33.345243    5636 cache.go:56] Caching tarball of preloaded images
	I0826 04:25:33.345303    5636 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:25:33.345310    5636 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0826 04:25:33.345371    5636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/old-k8s-version-173000/config.json ...
	I0826 04:25:33.345382    5636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/old-k8s-version-173000/config.json: {Name:mkb862654e9251dbd417f1dc0387e6f329cb76c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:25:33.345802    5636 start.go:360] acquireMachinesLock for old-k8s-version-173000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:33.345838    5636 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "old-k8s-version-173000"
	I0826 04:25:33.345851    5636 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:25:33.345905    5636 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:25:33.354226    5636 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:25:33.372652    5636 start.go:159] libmachine.API.Create for "old-k8s-version-173000" (driver="qemu2")
	I0826 04:25:33.372686    5636 client.go:168] LocalClient.Create starting
	I0826 04:25:33.372755    5636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:25:33.372785    5636 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:33.372796    5636 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:33.372831    5636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:25:33.372855    5636 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:33.372863    5636 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:33.373343    5636 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:25:33.538697    5636 main.go:141] libmachine: Creating SSH key...
	I0826 04:25:33.863883    5636 main.go:141] libmachine: Creating Disk image...
	I0826 04:25:33.863893    5636 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:25:33.864119    5636 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2
	I0826 04:25:33.874002    5636 main.go:141] libmachine: STDOUT: 
	I0826 04:25:33.874025    5636 main.go:141] libmachine: STDERR: 
	I0826 04:25:33.874072    5636 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2 +20000M
	I0826 04:25:33.881948    5636 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:25:33.881970    5636 main.go:141] libmachine: STDERR: 
	I0826 04:25:33.881990    5636 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2
	I0826 04:25:33.881995    5636 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:25:33.882005    5636 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:33.882051    5636 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:d3:a4:21:b8:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2
	I0826 04:25:33.883694    5636 main.go:141] libmachine: STDOUT: 
	I0826 04:25:33.883712    5636 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:33.883729    5636 client.go:171] duration metric: took 511.049333ms to LocalClient.Create
	I0826 04:25:35.885857    5636 start.go:128] duration metric: took 2.539989416s to createHost
	I0826 04:25:35.885976    5636 start.go:83] releasing machines lock for "old-k8s-version-173000", held for 2.540118125s
	W0826 04:25:35.886060    5636 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:35.897430    5636 out.go:177] * Deleting "old-k8s-version-173000" in qemu2 ...
	W0826 04:25:35.929303    5636 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:35.929341    5636 start.go:729] Will try again in 5 seconds ...
	I0826 04:25:40.931412    5636 start.go:360] acquireMachinesLock for old-k8s-version-173000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:40.931878    5636 start.go:364] duration metric: took 367.084µs to acquireMachinesLock for "old-k8s-version-173000"
	I0826 04:25:40.931994    5636 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:25:40.932299    5636 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:25:40.951824    5636 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:25:41.004343    5636 start.go:159] libmachine.API.Create for "old-k8s-version-173000" (driver="qemu2")
	I0826 04:25:41.004400    5636 client.go:168] LocalClient.Create starting
	I0826 04:25:41.004567    5636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:25:41.004644    5636 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:41.004660    5636 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:41.004729    5636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:25:41.004777    5636 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:41.004789    5636 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:41.005282    5636 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:25:41.178677    5636 main.go:141] libmachine: Creating SSH key...
	I0826 04:25:41.216241    5636 main.go:141] libmachine: Creating Disk image...
	I0826 04:25:41.216247    5636 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:25:41.216404    5636 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2
	I0826 04:25:41.225555    5636 main.go:141] libmachine: STDOUT: 
	I0826 04:25:41.225574    5636 main.go:141] libmachine: STDERR: 
	I0826 04:25:41.225636    5636 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2 +20000M
	I0826 04:25:41.233402    5636 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:25:41.233416    5636 main.go:141] libmachine: STDERR: 
	I0826 04:25:41.233428    5636 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2
	I0826 04:25:41.233432    5636 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:25:41.233444    5636 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:41.233469    5636 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:15:d2:8a:6d:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2
	I0826 04:25:41.235057    5636 main.go:141] libmachine: STDOUT: 
	I0826 04:25:41.235076    5636 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:41.235088    5636 client.go:171] duration metric: took 230.688542ms to LocalClient.Create
	I0826 04:25:43.237237    5636 start.go:128] duration metric: took 2.304956291s to createHost
	I0826 04:25:43.237312    5636 start.go:83] releasing machines lock for "old-k8s-version-173000", held for 2.305464167s
	W0826 04:25:43.237741    5636 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-173000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-173000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:43.247502    5636 out.go:201] 
	W0826 04:25:43.257576    5636 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:25:43.257636    5636 out.go:270] * 
	* 
	W0826 04:25:43.260347    5636 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:25:43.268546    5636 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-173000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (67.605375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.13s)
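The stderr capture above also shows the driver's full recovery path before it gives up: StartHost fails, the half-created machine is deleted, a fixed 5-second pause follows ("Will try again in 5 seconds ..."), and exactly one retry runs before the GUEST_PROVISION exit. The sketch below is illustrative only, not minikube's actual implementation; the function and variable names are hypothetical:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the create -> delete -> wait -> retry sequence
	// visible in the log: one retry with a fixed 5s pause. Illustrative only.
	func startWithRetry(create func() error, cleanup func()) error {
		if err := create(); err == nil {
			return nil
		}
		cleanup()                   // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := create(); err != nil {
			return fmt.Errorf("error provisioning guest: %w", err) // surfaces as GUEST_PROVISION
		}
		return nil
	}

	func main() {
		create := func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}
		cleanup := func() { fmt.Println("deleting half-created VM") }
		if err := startWithRetry(create, cleanup); err != nil {
			fmt.Println("X Exiting:", err)
		}
	}

Because the retry hits the same dead socket, both attempts fail identically, which is why every test in this report shows the same pair of "Connection refused" errors roughly five seconds apart.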

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-173000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-173000 create -f testdata/busybox.yaml: exit status 1 (29.398833ms)

** stderr ** 
	error: context "old-k8s-version-173000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-173000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (30.862917ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (30.293083ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
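DeployApp did not fail on its own: it is a cascade from FirstStart, which never created the cluster, so kubectl has no "old-k8s-version-173000" context to use. A small precondition check makes that dependency explicit; this is a sketch with a hypothetical file name, not anything in helpers_test.go, and it assumes kubectl is on PATH:

	// contextcheck.go - sketch: confirms a kubeconfig context exists before
	// running steps that depend on it, mirroring the "context ... does not
	// exist" error above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kubectl failed:", err)
			os.Exit(1)
		}
		want := "old-k8s-version-173000"
		for _, name := range strings.Fields(string(out)) {
			if name == want {
				fmt.Println("context exists:", want)
				return
			}
		}
		fmt.Fprintf(os.Stderr, "context %q does not exist; the earlier start must have failed\n", want)
		os.Exit(1)
	}

The same cascade explains the EnableAddonWhileActive failure below: the addon command is issued against a profile whose host never came up.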

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-173000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-173000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-173000 describe deploy/metrics-server -n kube-system: exit status 1 (26.96425ms)

** stderr ** 
	error: context "old-k8s-version-173000" does not exist

                                                
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-173000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (30.046167ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
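One diagnostic signal worth noting across all of these logs: the disk-image steps always succeed (qemu-img convert to qcow2, then resize by +20000M) and only the socket_vmnet dial fails, so storage provisioning is not implicated. The two qemu-img invocations reduce to the sketch below; machineDir is a hypothetical stand-in for the per-profile machines directory, while the qemu-img flags match the logged commands exactly:

	// diskimage.go - sketch of the two qemu-img steps seen in the logs:
	// convert the raw seed image to qcow2, then grow it by 20000 MB.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		machineDir := os.ExpandEnv("$HOME/.minikube/machines/demo") // hypothetical path
		raw := machineDir + "/disk.qcow2.raw"
		qcow2 := machineDir + "/disk.qcow2"

		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
			{"qemu-img", "resize", qcow2, "+20000M"},
		}
		for _, args := range steps {
			cmd := exec.Command(args[0], args[1:]...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintln(os.Stderr, "disk step failed:", err)
				os.Exit(1)
			}
		}
		fmt.Println("disk image created and resized")
	}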

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-173000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-173000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.185446166s)

-- stdout --
	* [old-k8s-version-173000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-173000" primary control-plane node in "old-k8s-version-173000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-173000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-173000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0826 04:25:46.978230    5691 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:25:46.978352    5691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:46.978356    5691 out.go:358] Setting ErrFile to fd 2...
	I0826 04:25:46.978359    5691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:46.978481    5691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:25:46.979584    5691 out.go:352] Setting JSON to false
	I0826 04:25:46.995440    5691 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3309,"bootTime":1724668237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:25:46.995515    5691 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:25:47.000985    5691 out.go:177] * [old-k8s-version-173000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:25:47.008905    5691 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:25:47.008950    5691 notify.go:220] Checking for updates...
	I0826 04:25:47.015882    5691 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:25:47.018983    5691 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:25:47.021988    5691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:25:47.024920    5691 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:25:47.027966    5691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:25:47.031262    5691 config.go:182] Loaded profile config "old-k8s-version-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0826 04:25:47.033025    5691 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0826 04:25:47.035926    5691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:25:47.040001    5691 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:25:47.044955    5691 start.go:297] selected driver: qemu2
	I0826 04:25:47.044962    5691 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:25:47.045032    5691 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:25:47.047235    5691 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:25:47.047283    5691 cni.go:84] Creating CNI manager for ""
	I0826 04:25:47.047290    5691 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0826 04:25:47.047317    5691 start.go:340] cluster config:
	{Name:old-k8s-version-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:25:47.050820    5691 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:47.057929    5691 out.go:177] * Starting "old-k8s-version-173000" primary control-plane node in "old-k8s-version-173000" cluster
	I0826 04:25:47.062977    5691 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0826 04:25:47.062992    5691 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0826 04:25:47.062999    5691 cache.go:56] Caching tarball of preloaded images
	I0826 04:25:47.063058    5691 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:25:47.063065    5691 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0826 04:25:47.063132    5691 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/old-k8s-version-173000/config.json ...
	I0826 04:25:47.063626    5691 start.go:360] acquireMachinesLock for old-k8s-version-173000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:47.063658    5691 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "old-k8s-version-173000"
	I0826 04:25:47.063667    5691 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:25:47.063673    5691 fix.go:54] fixHost starting: 
	I0826 04:25:47.063792    5691 fix.go:112] recreateIfNeeded on old-k8s-version-173000: state=Stopped err=<nil>
	W0826 04:25:47.063800    5691 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:25:47.067910    5691 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-173000" ...
	I0826 04:25:47.073386    5691 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:47.073436    5691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:15:d2:8a:6d:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2
	I0826 04:25:47.075519    5691 main.go:141] libmachine: STDOUT: 
	I0826 04:25:47.075550    5691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:47.075582    5691 fix.go:56] duration metric: took 11.910083ms for fixHost
	I0826 04:25:47.075587    5691 start.go:83] releasing machines lock for "old-k8s-version-173000", held for 11.923541ms
	W0826 04:25:47.075595    5691 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:25:47.075628    5691 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:47.075633    5691 start.go:729] Will try again in 5 seconds ...
	I0826 04:25:52.077228    5691 start.go:360] acquireMachinesLock for old-k8s-version-173000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:52.077646    5691 start.go:364] duration metric: took 311.833µs to acquireMachinesLock for "old-k8s-version-173000"
	I0826 04:25:52.077770    5691 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:25:52.077787    5691 fix.go:54] fixHost starting: 
	I0826 04:25:52.078482    5691 fix.go:112] recreateIfNeeded on old-k8s-version-173000: state=Stopped err=<nil>
	W0826 04:25:52.078508    5691 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:25:52.085940    5691 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-173000" ...
	I0826 04:25:52.089888    5691 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:52.090135    5691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:15:d2:8a:6d:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/old-k8s-version-173000/disk.qcow2
	I0826 04:25:52.099126    5691 main.go:141] libmachine: STDOUT: 
	I0826 04:25:52.099220    5691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:52.099284    5691 fix.go:56] duration metric: took 21.497958ms for fixHost
	I0826 04:25:52.099300    5691 start.go:83] releasing machines lock for "old-k8s-version-173000", held for 21.6345ms
	W0826 04:25:52.099512    5691 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-173000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:52.106933    5691 out.go:201] 
	W0826 04:25:52.111126    5691 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:25:52.111170    5691 out.go:270] * 
	W0826 04:25:52.114355    5691 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:25:52.121816    5691 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-173000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (70.9685ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
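
Every qemu2 start in this group dies at the same point: the driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot dial /var/run/socket_vmnet before qemu-system-aarch64 ever boots. A minimal sketch (not from the test suite) that probes that socket directly; the socket path is the one reported in the log above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// This is the "Connection refused" every start attempt above hits.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}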

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-173000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (32.992292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
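
This failure and the ones that follow are downstream of SecondStart: the VM never restarted, so no "old-k8s-version-173000" context was written back and every kubectl call errors out immediately. A hedged sketch of the same pre-flight check using client-go's clientcmd; the kubeconfig path and context name are copied from the logs above:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the run's environment above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19501-1045/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["old-k8s-version-173000"]; !ok {
		// The condition kubectl and the test both trip over.
		fmt.Println(`context "old-k8s-version-173000" does not exist`)
	}
}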

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-173000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-173000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-173000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.654791ms)

** stderr ** 
	error: context "old-k8s-version-173000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-173000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (30.58475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-173000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (29.784833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
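
The want/got diff above has the shape of a go-cmp string-slice diff: every expected v1.20.0 image shows as missing because "image list" had nothing to report from a VM that never booted. An illustrative reduction (a few entries copied from the diff; got is left empty, as in this run):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// A subset of the expected image list, copied from the diff above.
	want := []string{
		"k8s.gcr.io/coredns:1.7.0",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/kube-apiserver:v1.20.0",
	}
	var got []string // image list returned nothing: the VM never ran
	fmt.Print(cmp.Diff(want, got)) // every entry prints as "-", as above
}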

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-173000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-173000 --alsologtostderr -v=1: exit status 83 (39.266417ms)

-- stdout --
	* The control-plane node old-k8s-version-173000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-173000"

-- /stdout --
** stderr ** 
	I0826 04:25:52.400379    5710 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:25:52.400757    5710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:52.400761    5710 out.go:358] Setting ErrFile to fd 2...
	I0826 04:25:52.400763    5710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:52.400928    5710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:25:52.401140    5710 out.go:352] Setting JSON to false
	I0826 04:25:52.401148    5710 mustload.go:65] Loading cluster: old-k8s-version-173000
	I0826 04:25:52.401352    5710 config.go:182] Loaded profile config "old-k8s-version-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0826 04:25:52.405221    5710 out.go:177] * The control-plane node old-k8s-version-173000 host is not running: state=Stopped
	I0826 04:25:52.408228    5710 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-173000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-173000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (30.26875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (30.3355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
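
Two distinct exit codes appear in this group: 80 accompanies the GUEST_PROVISION failures, while pause exits 83 with only the "host is not running" advisory. A sketch of how a harness can read the code back through os/exec (assumes a minikube binary on PATH; the profile name is taken from the log above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Assumes "minikube" is on PATH; profile name from this report.
	cmd := exec.Command("minikube", "pause", "-p", "old-k8s-version-173000")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status", ee.ExitCode()) // 83 in the run captured above
	}
}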

TestStartStop/group/no-preload/serial/FirstStart (10.36s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-993000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-993000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.290898042s)

-- stdout --
	* [no-preload-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-993000" primary control-plane node in "no-preload-993000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-993000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:25:52.717310    5727 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:25:52.717447    5727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:52.717450    5727 out.go:358] Setting ErrFile to fd 2...
	I0826 04:25:52.717452    5727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:25:52.717590    5727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:25:52.718621    5727 out.go:352] Setting JSON to false
	I0826 04:25:52.734935    5727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3315,"bootTime":1724668237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:25:52.735002    5727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:25:52.740165    5727 out.go:177] * [no-preload-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:25:52.747202    5727 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:25:52.747244    5727 notify.go:220] Checking for updates...
	I0826 04:25:52.754200    5727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:25:52.757139    5727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:25:52.760190    5727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:25:52.763167    5727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:25:52.766114    5727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:25:52.769510    5727 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:25:52.769576    5727 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:25:52.769630    5727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:25:52.774115    5727 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:25:52.781224    5727 start.go:297] selected driver: qemu2
	I0826 04:25:52.781234    5727 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:25:52.781241    5727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:25:52.783414    5727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:25:52.787264    5727 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:25:52.790313    5727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:25:52.790360    5727 cni.go:84] Creating CNI manager for ""
	I0826 04:25:52.790373    5727 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:25:52.790381    5727 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:25:52.790411    5727 start.go:340] cluster config:
	{Name:no-preload-993000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:25:52.793866    5727 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:52.801078    5727 out.go:177] * Starting "no-preload-993000" primary control-plane node in "no-preload-993000" cluster
	I0826 04:25:52.805182    5727 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:25:52.805254    5727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/no-preload-993000/config.json ...
	I0826 04:25:52.805269    5727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/no-preload-993000/config.json: {Name:mk60cdf2dd7ad45b03e46229a2d630e738131575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:25:52.805264    5727 cache.go:107] acquiring lock: {Name:mkdfecd2c249d21bf4ba9a955a6cf08754c7d400 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:52.805269    5727 cache.go:107] acquiring lock: {Name:mke6d92debf12cbb49fb1cca7c371a52bc37b3fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:52.805283    5727 cache.go:107] acquiring lock: {Name:mk037bebdc8c27bff9b383a0a9261e6e4cdd54fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:52.805322    5727 cache.go:115] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0826 04:25:52.805329    5727 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 66.25µs
	I0826 04:25:52.805336    5727 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0826 04:25:52.805362    5727 cache.go:107] acquiring lock: {Name:mkb2e5510babb53c2471b77bf46d2e9a7284e15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:52.805399    5727 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0826 04:25:52.805430    5727 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 04:25:52.805261    5727 cache.go:107] acquiring lock: {Name:mk5fe3a26e9f78f5b7411def670864346ca64b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:52.805455    5727 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0826 04:25:52.805484    5727 cache.go:107] acquiring lock: {Name:mkaed3ec4d0545047ccae590dab79053c2a8b1c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:52.805531    5727 cache.go:107] acquiring lock: {Name:mk2758024c9435e6a20b0939392669677f697580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:52.805532    5727 cache.go:107] acquiring lock: {Name:mk8978f98eb25f2b27bd890d021445bf23d41e5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:25:52.805608    5727 start.go:360] acquireMachinesLock for no-preload-993000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:25:52.805632    5727 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 04:25:52.805645    5727 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 04:25:52.805655    5727 start.go:364] duration metric: took 39.5µs to acquireMachinesLock for "no-preload-993000"
	I0826 04:25:52.805667    5727 start.go:93] Provisioning new machine with config: &{Name:no-preload-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:25:52.805713    5727 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:25:52.805736    5727 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 04:25:52.805848    5727 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 04:25:52.814151    5727 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:25:52.818197    5727 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0826 04:25:52.818978    5727 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0826 04:25:52.820931    5727 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 04:25:52.820974    5727 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 04:25:52.821296    5727 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 04:25:52.821370    5727 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 04:25:52.821453    5727 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 04:25:52.831694    5727 start.go:159] libmachine.API.Create for "no-preload-993000" (driver="qemu2")
	I0826 04:25:52.831710    5727 client.go:168] LocalClient.Create starting
	I0826 04:25:52.831774    5727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:25:52.831804    5727 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:52.831812    5727 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:52.831852    5727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:25:52.831875    5727 main.go:141] libmachine: Decoding PEM data...
	I0826 04:25:52.831885    5727 main.go:141] libmachine: Parsing certificate...
	I0826 04:25:52.832222    5727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:25:52.998642    5727 main.go:141] libmachine: Creating SSH key...
	I0826 04:25:53.185123    5727 main.go:141] libmachine: Creating Disk image...
	I0826 04:25:53.185143    5727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:25:53.185334    5727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2
	I0826 04:25:53.194803    5727 main.go:141] libmachine: STDOUT: 
	I0826 04:25:53.194820    5727 main.go:141] libmachine: STDERR: 
	I0826 04:25:53.194867    5727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2 +20000M
	I0826 04:25:53.203037    5727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:25:53.203053    5727 main.go:141] libmachine: STDERR: 
	I0826 04:25:53.203064    5727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2
	I0826 04:25:53.203069    5727 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:25:53.203081    5727 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:25:53.203116    5727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:ad:a6:1b:b9:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2
	I0826 04:25:53.204758    5727 main.go:141] libmachine: STDOUT: 
	I0826 04:25:53.204772    5727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:25:53.204785    5727 client.go:171] duration metric: took 373.080833ms to LocalClient.Create
	I0826 04:25:53.206530    5727 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0826 04:25:53.215529    5727 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0826 04:25:53.225952    5727 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0826 04:25:53.244904    5727 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0826 04:25:53.245676    5727 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0826 04:25:53.286564    5727 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0826 04:25:53.303774    5727 cache.go:162] opening:  /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0826 04:25:53.325197    5727 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0826 04:25:53.325209    5727 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 519.937ms
	I0826 04:25:53.325217    5727 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0826 04:25:55.205137    5727 start.go:128] duration metric: took 2.399457166s to createHost
	I0826 04:25:55.205205    5727 start.go:83] releasing machines lock for "no-preload-993000", held for 2.399594084s
	W0826 04:25:55.205263    5727 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:55.224630    5727 out.go:177] * Deleting "no-preload-993000" in qemu2 ...
	W0826 04:25:55.261453    5727 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:25:55.261489    5727 start.go:729] Will try again in 5 seconds ...
	I0826 04:25:55.852997    5727 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0826 04:25:55.853047    5727 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.047655083s
	I0826 04:25:55.853095    5727 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0826 04:25:57.028622    5727 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0826 04:25:57.028683    5727 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.223244625s
	I0826 04:25:57.028716    5727 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0826 04:25:57.422366    5727 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0826 04:25:57.422432    5727 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 4.617118208s
	I0826 04:25:57.422461    5727 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0826 04:25:57.820286    5727 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0826 04:25:57.820361    5727 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 5.015215792s
	I0826 04:25:57.820395    5727 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0826 04:25:57.922581    5727 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0826 04:25:57.922620    5727 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 5.117468875s
	I0826 04:25:57.922641    5727 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0826 04:26:00.262227    5727 start.go:360] acquireMachinesLock for no-preload-993000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:00.262744    5727 start.go:364] duration metric: took 435µs to acquireMachinesLock for "no-preload-993000"
	I0826 04:26:00.262868    5727 start.go:93] Provisioning new machine with config: &{Name:no-preload-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:26:00.263142    5727 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:26:00.274790    5727 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:26:00.325834    5727 start.go:159] libmachine.API.Create for "no-preload-993000" (driver="qemu2")
	I0826 04:26:00.325890    5727 client.go:168] LocalClient.Create starting
	I0826 04:26:00.326006    5727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:26:00.326069    5727 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:00.326094    5727 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:00.326164    5727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:26:00.326208    5727 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:00.326224    5727 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:00.326752    5727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:26:00.521410    5727 main.go:141] libmachine: Creating SSH key...
	I0826 04:26:00.912351    5727 main.go:141] libmachine: Creating Disk image...
	I0826 04:26:00.912367    5727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:26:00.912571    5727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2
	I0826 04:26:00.922123    5727 main.go:141] libmachine: STDOUT: 
	I0826 04:26:00.922151    5727 main.go:141] libmachine: STDERR: 
	I0826 04:26:00.922217    5727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2 +20000M
	I0826 04:26:00.930255    5727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:26:00.930321    5727 main.go:141] libmachine: STDERR: 
	I0826 04:26:00.930335    5727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2
	I0826 04:26:00.930339    5727 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:26:00.930352    5727 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:00.930387    5727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e7:38:73:67:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2
	I0826 04:26:00.932050    5727 main.go:141] libmachine: STDOUT: 
	I0826 04:26:00.932076    5727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:00.932089    5727 client.go:171] duration metric: took 606.206917ms to LocalClient.Create
	I0826 04:26:02.316641    5727 cache.go:157] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0826 04:26:02.316714    5727 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.5115665s
	I0826 04:26:02.316744    5727 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0826 04:26:02.316800    5727 cache.go:87] Successfully saved all images to host disk.
	I0826 04:26:02.934281    5727 start.go:128] duration metric: took 2.671171042s to createHost
	I0826 04:26:02.934351    5727 start.go:83] releasing machines lock for "no-preload-993000", held for 2.671644041s
	W0826 04:26:02.934742    5727 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:02.947254    5727 out.go:201] 
	W0826 04:26:02.951373    5727 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:02.951400    5727 out.go:270] * 
	W0826 04:26:02.954073    5727 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:26:02.966159    5727 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-993000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (66.896625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.36s)
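
All of the no-preload failures above share one root cause, visible in the stderr: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal Go sketch that isolates this check without involving minikube or QEMU; the file name is hypothetical and only the socket path is taken from the log:

	// check_vmnet.go - hypothetical diagnostic, not part of the test suite.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken from SocketVMnetPath in the cluster config above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			// A "connection refused" here matches the ERROR lines in the test
			// output and means the socket_vmnet daemon is not listening.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this dial fails the same way, the daemon needs to be (re)started on the CI host before any qemu2-driver test can pass.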

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-993000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-993000 create -f testdata/busybox.yaml: exit status 1 (29.255083ms)

** stderr ** 
	error: context "no-preload-993000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-993000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (31.021375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (30.212625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
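
The kubectl error here follows directly from the previous failure: because FirstStart never brought the VM up, minikube never wrote a "no-preload-993000" context into the kubeconfig, so every kubectl --context call fails before contacting any server. A sketch of the same lookup kubectl performs, assuming k8s.io/client-go is available; the program is illustrative, not part of the suite:

	// context_check.go - illustrative only.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Loads the same kubeconfig kubectl uses (KUBECONFIG or ~/.kube/config).
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("failed to load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["no-preload-993000"]; !ok {
			// The exact condition kubectl reports as:
			//   error: context "no-preload-993000" does not exist
			fmt.Println("context missing: first start failed before writing it")
		}
	}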

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-993000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-993000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-993000 describe deploy/metrics-server -n kube-system: exit status 1 (26.853208ms)

** stderr ** 
	error: context "no-preload-993000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-993000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (30.822417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-993000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-993000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.185052833s)

-- stdout --
	* [no-preload-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-993000" primary control-plane node in "no-preload-993000" cluster
	* Restarting existing qemu2 VM for "no-preload-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:26:07.208979    5808 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:07.209153    5808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:07.209157    5808 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:07.209159    5808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:07.209286    5808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:07.210242    5808 out.go:352] Setting JSON to false
	I0826 04:26:07.226031    5808 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3330,"bootTime":1724668237,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:26:07.226103    5808 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:26:07.231030    5808 out.go:177] * [no-preload-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:26:07.238234    5808 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:26:07.238277    5808 notify.go:220] Checking for updates...
	I0826 04:26:07.245223    5808 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:26:07.248240    5808 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:26:07.251208    5808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:26:07.254157    5808 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:26:07.257319    5808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:26:07.259115    5808 config.go:182] Loaded profile config "no-preload-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:07.259371    5808 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:26:07.263219    5808 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:26:07.270041    5808 start.go:297] selected driver: qemu2
	I0826 04:26:07.270049    5808 start.go:901] validating driver "qemu2" against &{Name:no-preload-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:07.270114    5808 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:26:07.272390    5808 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:26:07.272432    5808 cni.go:84] Creating CNI manager for ""
	I0826 04:26:07.272442    5808 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:26:07.272465    5808 start.go:340] cluster config:
	{Name:no-preload-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:07.275836    5808 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:07.284164    5808 out.go:177] * Starting "no-preload-993000" primary control-plane node in "no-preload-993000" cluster
	I0826 04:26:07.288146    5808 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:26:07.288247    5808 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/no-preload-993000/config.json ...
	I0826 04:26:07.288279    5808 cache.go:107] acquiring lock: {Name:mkdfecd2c249d21bf4ba9a955a6cf08754c7d400 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:07.288284    5808 cache.go:107] acquiring lock: {Name:mk5fe3a26e9f78f5b7411def670864346ca64b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:07.288340    5808 cache.go:107] acquiring lock: {Name:mk037bebdc8c27bff9b383a0a9261e6e4cdd54fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:07.288350    5808 cache.go:115] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0826 04:26:07.288356    5808 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 81.792µs
	I0826 04:26:07.288360    5808 cache.go:115] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0826 04:26:07.288366    5808 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0826 04:26:07.288369    5808 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 95.375µs
	I0826 04:26:07.288375    5808 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0826 04:26:07.288376    5808 cache.go:107] acquiring lock: {Name:mkb2e5510babb53c2471b77bf46d2e9a7284e15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:07.288381    5808 cache.go:107] acquiring lock: {Name:mk8978f98eb25f2b27bd890d021445bf23d41e5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:07.288412    5808 cache.go:115] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0826 04:26:07.288416    5808 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 40.709µs
	I0826 04:26:07.288420    5808 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0826 04:26:07.288412    5808 cache.go:115] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0826 04:26:07.288431    5808 cache.go:107] acquiring lock: {Name:mkaed3ec4d0545047ccae590dab79053c2a8b1c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:07.288279    5808 cache.go:107] acquiring lock: {Name:mke6d92debf12cbb49fb1cca7c371a52bc37b3fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:07.288438    5808 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 120.875µs
	I0826 04:26:07.288441    5808 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0826 04:26:07.288421    5808 cache.go:115] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0826 04:26:07.288447    5808 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 66.75µs
	I0826 04:26:07.288452    5808 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0826 04:26:07.288418    5808 cache.go:107] acquiring lock: {Name:mk2758024c9435e6a20b0939392669677f697580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:07.288475    5808 cache.go:115] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0826 04:26:07.288478    5808 cache.go:115] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0826 04:26:07.288512    5808 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 96.5µs
	I0826 04:26:07.288479    5808 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 207.875µs
	I0826 04:26:07.288487    5808 cache.go:115] /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0826 04:26:07.288529    5808 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0826 04:26:07.288535    5808 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 117.292µs
	I0826 04:26:07.288539    5808 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0826 04:26:07.288520    5808 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0826 04:26:07.288546    5808 cache.go:87] Successfully saved all images to host disk.
	I0826 04:26:07.288682    5808 start.go:360] acquireMachinesLock for no-preload-993000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:07.288711    5808 start.go:364] duration metric: took 23.333µs to acquireMachinesLock for "no-preload-993000"
	I0826 04:26:07.288719    5808 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:26:07.288727    5808 fix.go:54] fixHost starting: 
	I0826 04:26:07.288844    5808 fix.go:112] recreateIfNeeded on no-preload-993000: state=Stopped err=<nil>
	W0826 04:26:07.288852    5808 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:26:07.296188    5808 out.go:177] * Restarting existing qemu2 VM for "no-preload-993000" ...
	I0826 04:26:07.300190    5808 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:07.300227    5808 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e7:38:73:67:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2
	I0826 04:26:07.302114    5808 main.go:141] libmachine: STDOUT: 
	I0826 04:26:07.302133    5808 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:07.302156    5808 fix.go:56] duration metric: took 13.431583ms for fixHost
	I0826 04:26:07.302161    5808 start.go:83] releasing machines lock for "no-preload-993000", held for 13.446167ms
	W0826 04:26:07.302168    5808 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:07.302195    5808 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:07.302201    5808 start.go:729] Will try again in 5 seconds ...
	I0826 04:26:12.304275    5808 start.go:360] acquireMachinesLock for no-preload-993000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:12.304673    5808 start.go:364] duration metric: took 310.75µs to acquireMachinesLock for "no-preload-993000"
	I0826 04:26:12.304789    5808 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:26:12.304810    5808 fix.go:54] fixHost starting: 
	I0826 04:26:12.305558    5808 fix.go:112] recreateIfNeeded on no-preload-993000: state=Stopped err=<nil>
	W0826 04:26:12.305589    5808 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:26:12.311133    5808 out.go:177] * Restarting existing qemu2 VM for "no-preload-993000" ...
	I0826 04:26:12.319056    5808 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:12.319274    5808 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e7:38:73:67:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/no-preload-993000/disk.qcow2
	I0826 04:26:12.328496    5808 main.go:141] libmachine: STDOUT: 
	I0826 04:26:12.328582    5808 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:12.328676    5808 fix.go:56] duration metric: took 23.869792ms for fixHost
	I0826 04:26:12.328704    5808 start.go:83] releasing machines lock for "no-preload-993000", held for 24.009833ms
	W0826 04:26:12.328958    5808 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-993000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-993000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:12.338095    5808 out.go:201] 
	W0826 04:26:12.341201    5808 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:12.341233    5808 out.go:270] * 
	* 
	W0826 04:26:12.343736    5808 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:26:12.356237    5808 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-993000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (69.0375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
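
The stderr above also shows minikube's retry policy for a failed host start: one attempt, a "! StartHost failed, but will try again" warning, a fixed 5-second sleep, a second attempt, then a hard GUEST_PROVISION exit (status 80). A simplified Go sketch of that control flow; the function names are illustrative, not minikube's actual code:

	// retry_sketch.go - simplified model of the start.go retry seen above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the qemu2 driver start, which in this run
	// always fails with the socket_vmnet connection error.
	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}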

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-993000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (33.115667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-993000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-993000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-993000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.074959ms)

** stderr ** 
	error: context "no-preload-993000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-993000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (28.382125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-993000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (29.703625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
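
The "(-want +got)" listing above is go-cmp diff notation: the test expected the eight v1.31.0 images to be cached in the VM, but "image list" returns nothing for a stopped host, so every entry shows as missing. A sketch that reproduces the same shape of output, assuming github.com/google/go-cmp; the slices are abbreviated for illustration:

	// image_diff.go - illustrative reproduction of the diff format.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // image list has nothing to report for a stopped VM

		// cmp.Diff(want, got) prints removals as "-" and additions as "+",
		// the same "-want +got" notation as the failure above.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}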

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-993000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-993000 --alsologtostderr -v=1: exit status 83 (40.956959ms)

-- stdout --
	* The control-plane node no-preload-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-993000"

-- /stdout --
** stderr ** 
	I0826 04:26:12.621658    5830 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:12.621804    5830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:12.621807    5830 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:12.621810    5830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:12.621961    5830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:12.622168    5830 out.go:352] Setting JSON to false
	I0826 04:26:12.622176    5830 mustload.go:65] Loading cluster: no-preload-993000
	I0826 04:26:12.622358    5830 config.go:182] Loaded profile config "no-preload-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:12.626684    5830 out.go:177] * The control-plane node no-preload-993000 host is not running: state=Stopped
	I0826 04:26:12.629513    5830 out.go:177]   To start a cluster, run: "minikube start -p no-preload-993000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-993000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (30.385667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (29.126833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
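
Two different non-zero exits appear in this block: pause returns 83 together with the "host is not running: state=Stopped" advisory, while the post-mortem status calls return 7, which helpers_test.go explicitly treats as acceptable ("may be ok"). One plausible reading, offered here only as an assumption, is that the status exit code is a bitmask of failed components, which would make 7 all three component bits set; a sketch decoding it under that assumption (the flag names are invented for illustration):

	// status_bits.go - decodes exit status 7 under an assumed bitmask layout.
	package main

	import "fmt"

	func main() {
		const (
			hostDown    = 1 << 0 // assumed: host not running
			clusterDown = 1 << 1 // assumed: control plane not running
			k8sDown     = 1 << 2 // assumed: kubernetes services not running
		)
		code := 7 // observed exit status from the status calls above
		fmt.Println("host down:   ", code&hostDown != 0)
		fmt.Println("cluster down:", code&clusterDown != 0)
		fmt.Println("k8s down:    ", code&k8sDown != 0)
	}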

TestStartStop/group/embed-certs/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-434000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-434000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.003570958s)

-- stdout --
	* [embed-certs-434000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-434000" primary control-plane node in "embed-certs-434000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-434000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:26:12.941481    5847 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:12.941620    5847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:12.941623    5847 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:12.941631    5847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:12.941752    5847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:12.942838    5847 out.go:352] Setting JSON to false
	I0826 04:26:12.958787    5847 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3335,"bootTime":1724668237,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:26:12.958875    5847 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:26:12.963675    5847 out.go:177] * [embed-certs-434000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:26:12.970514    5847 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:26:12.970564    5847 notify.go:220] Checking for updates...
	I0826 04:26:12.977610    5847 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:26:12.980498    5847 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:26:12.983574    5847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:26:12.986639    5847 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:26:12.988192    5847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:26:12.991989    5847 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:12.992049    5847 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:12.992101    5847 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:26:12.996556    5847 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:26:13.002715    5847 start.go:297] selected driver: qemu2
	I0826 04:26:13.002724    5847 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:26:13.002731    5847 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:26:13.004833    5847 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:26:13.007618    5847 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:26:13.010779    5847 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:26:13.010805    5847 cni.go:84] Creating CNI manager for ""
	I0826 04:26:13.010814    5847 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:26:13.010819    5847 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:26:13.010852    5847 start.go:340] cluster config:
	{Name:embed-certs-434000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:13.014747    5847 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:13.022637    5847 out.go:177] * Starting "embed-certs-434000" primary control-plane node in "embed-certs-434000" cluster
	I0826 04:26:13.026490    5847 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:26:13.026508    5847 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:26:13.026517    5847 cache.go:56] Caching tarball of preloaded images
	I0826 04:26:13.026569    5847 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:26:13.026575    5847 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:26:13.026642    5847 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/embed-certs-434000/config.json ...
	I0826 04:26:13.026654    5847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/embed-certs-434000/config.json: {Name:mkb087505ddff389575b8509e61a4928d45f470b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:26:13.026884    5847 start.go:360] acquireMachinesLock for embed-certs-434000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:13.026921    5847 start.go:364] duration metric: took 30.166µs to acquireMachinesLock for "embed-certs-434000"
	I0826 04:26:13.026934    5847 start.go:93] Provisioning new machine with config: &{Name:embed-certs-434000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:26:13.026964    5847 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:26:13.035562    5847 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:26:13.053696    5847 start.go:159] libmachine.API.Create for "embed-certs-434000" (driver="qemu2")
	I0826 04:26:13.053728    5847 client.go:168] LocalClient.Create starting
	I0826 04:26:13.053801    5847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:26:13.053830    5847 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:13.053839    5847 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:13.053877    5847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:26:13.053901    5847 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:13.053908    5847 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:13.054342    5847 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:26:13.216593    5847 main.go:141] libmachine: Creating SSH key...
	I0826 04:26:13.370817    5847 main.go:141] libmachine: Creating Disk image...
	I0826 04:26:13.370823    5847 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:26:13.371031    5847 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2
	I0826 04:26:13.380771    5847 main.go:141] libmachine: STDOUT: 
	I0826 04:26:13.380814    5847 main.go:141] libmachine: STDERR: 
	I0826 04:26:13.380856    5847 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2 +20000M
	I0826 04:26:13.388686    5847 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:26:13.388705    5847 main.go:141] libmachine: STDERR: 
	I0826 04:26:13.388720    5847 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2
	I0826 04:26:13.388725    5847 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:26:13.388735    5847 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:13.388766    5847 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:12:fc:89:5a:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2
	I0826 04:26:13.390334    5847 main.go:141] libmachine: STDOUT: 
	I0826 04:26:13.390352    5847 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:13.390370    5847 client.go:171] duration metric: took 336.644959ms to LocalClient.Create
	I0826 04:26:15.392501    5847 start.go:128] duration metric: took 2.36557375s to createHost
	I0826 04:26:15.392568    5847 start.go:83] releasing machines lock for "embed-certs-434000", held for 2.365691583s
	W0826 04:26:15.392643    5847 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:15.406069    5847 out.go:177] * Deleting "embed-certs-434000" in qemu2 ...
	W0826 04:26:15.438696    5847 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:15.438725    5847 start.go:729] Will try again in 5 seconds ...
	I0826 04:26:20.440799    5847 start.go:360] acquireMachinesLock for embed-certs-434000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:20.441311    5847 start.go:364] duration metric: took 365.708µs to acquireMachinesLock for "embed-certs-434000"
	I0826 04:26:20.441428    5847 start.go:93] Provisioning new machine with config: &{Name:embed-certs-434000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:26:20.441728    5847 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:26:20.462577    5847 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:26:20.512434    5847 start.go:159] libmachine.API.Create for "embed-certs-434000" (driver="qemu2")
	I0826 04:26:20.512475    5847 client.go:168] LocalClient.Create starting
	I0826 04:26:20.512582    5847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:26:20.512646    5847 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:20.512662    5847 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:20.512726    5847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:26:20.512776    5847 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:20.512790    5847 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:20.513328    5847 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:26:20.684649    5847 main.go:141] libmachine: Creating SSH key...
	I0826 04:26:20.848414    5847 main.go:141] libmachine: Creating Disk image...
	I0826 04:26:20.848423    5847 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:26:20.848625    5847 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2
	I0826 04:26:20.857918    5847 main.go:141] libmachine: STDOUT: 
	I0826 04:26:20.857939    5847 main.go:141] libmachine: STDERR: 
	I0826 04:26:20.857982    5847 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2 +20000M
	I0826 04:26:20.865845    5847 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:26:20.865869    5847 main.go:141] libmachine: STDERR: 
	I0826 04:26:20.865885    5847 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2
	I0826 04:26:20.865890    5847 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:26:20.865898    5847 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:20.865935    5847 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:9e:e9:af:69:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2
	I0826 04:26:20.867505    5847 main.go:141] libmachine: STDOUT: 
	I0826 04:26:20.867520    5847 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:20.867532    5847 client.go:171] duration metric: took 355.061667ms to LocalClient.Create
	I0826 04:26:22.869661    5847 start.go:128] duration metric: took 2.427957333s to createHost
	I0826 04:26:22.869721    5847 start.go:83] releasing machines lock for "embed-certs-434000", held for 2.4284435s
	W0826 04:26:22.870118    5847 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:22.879868    5847 out.go:201] 
	W0826 04:26:22.890892    5847 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:22.890922    5847 out.go:270] * 
	* 
	W0826 04:26:22.893583    5847 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:26:22.902790    5847 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-434000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (68.873125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.07s)
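
Every failure in this group traces to the single root cause visible above: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so QEMU never receives its network file descriptor and the VM is never created. A minimal triage sketch for the affected host, assuming the /opt/socket_vmnet install layout shown in the executed command lines; the daemon invocation below follows the upstream socket_vmnet README and is an assumption, not something recorded in this report:

	# Does the unix socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumed restart, per upstream socket_vmnet docs (the gateway address is an
	# example value, not taken from this report):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet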

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-434000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-434000 create -f testdata/busybox.yaml: exit status 1 (29.317584ms)

** stderr ** 
	error: context "embed-certs-434000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-434000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (30.611ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (29.316291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
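
This failure is purely downstream of FirstStart: because the VM was never created, no kubeconfig context was written, so every kubectl call against the profile fails the same way. A quick check, using only standard kubectl (nothing specific to this harness):

	# The context only exists once a cluster has actually been created:
	kubectl config get-contexts -o name | grep embed-certs-434000 || echo "context missing"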

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-434000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-434000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-434000 describe deploy/metrics-server -n kube-system: exit status 1 (27.205083ms)

** stderr ** 
	error: context "embed-certs-434000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-434000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (30.501375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
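
What the assertion at start_stop_delete_test.go:221 effectively checks, expressed as a direct kubectl probe (a sketch only; with no cluster behind the context it fails exactly as shown above):

	# The metrics-server Deployment's image should carry the --registries override:
	kubectl --context embed-certs-434000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4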

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-434000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-434000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.185335875s)

-- stdout --
	* [embed-certs-434000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-434000" primary control-plane node in "embed-certs-434000" cluster
	* Restarting existing qemu2 VM for "embed-certs-434000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-434000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:26:25.403056    5891 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:25.403191    5891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:25.403195    5891 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:25.403197    5891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:25.403314    5891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:25.404265    5891 out.go:352] Setting JSON to false
	I0826 04:26:25.420179    5891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3348,"bootTime":1724668237,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:26:25.420247    5891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:26:25.424019    5891 out.go:177] * [embed-certs-434000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:26:25.431945    5891 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:26:25.431998    5891 notify.go:220] Checking for updates...
	I0826 04:26:25.439897    5891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:26:25.442906    5891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:26:25.445932    5891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:26:25.448812    5891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:26:25.451898    5891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:26:25.455200    5891 config.go:182] Loaded profile config "embed-certs-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:25.455462    5891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:26:25.458849    5891 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:26:25.465891    5891 start.go:297] selected driver: qemu2
	I0826 04:26:25.465900    5891 start.go:901] validating driver "qemu2" against &{Name:embed-certs-434000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:25.465969    5891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:26:25.468164    5891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:26:25.468214    5891 cni.go:84] Creating CNI manager for ""
	I0826 04:26:25.468221    5891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:26:25.468246    5891 start.go:340] cluster config:
	{Name:embed-certs-434000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:25.471684    5891 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:25.480887    5891 out.go:177] * Starting "embed-certs-434000" primary control-plane node in "embed-certs-434000" cluster
	I0826 04:26:25.483730    5891 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:26:25.483744    5891 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:26:25.483751    5891 cache.go:56] Caching tarball of preloaded images
	I0826 04:26:25.483800    5891 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:26:25.483805    5891 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:26:25.483860    5891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/embed-certs-434000/config.json ...
	I0826 04:26:25.484345    5891 start.go:360] acquireMachinesLock for embed-certs-434000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:25.484373    5891 start.go:364] duration metric: took 21.958µs to acquireMachinesLock for "embed-certs-434000"
	I0826 04:26:25.484381    5891 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:26:25.484387    5891 fix.go:54] fixHost starting: 
	I0826 04:26:25.484511    5891 fix.go:112] recreateIfNeeded on embed-certs-434000: state=Stopped err=<nil>
	W0826 04:26:25.484519    5891 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:26:25.488904    5891 out.go:177] * Restarting existing qemu2 VM for "embed-certs-434000" ...
	I0826 04:26:25.496876    5891 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:25.496914    5891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:9e:e9:af:69:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2
	I0826 04:26:25.498893    5891 main.go:141] libmachine: STDOUT: 
	I0826 04:26:25.498912    5891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:25.498946    5891 fix.go:56] duration metric: took 14.557416ms for fixHost
	I0826 04:26:25.498951    5891 start.go:83] releasing machines lock for "embed-certs-434000", held for 14.573541ms
	W0826 04:26:25.498961    5891 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:25.498996    5891 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:25.499001    5891 start.go:729] Will try again in 5 seconds ...
	I0826 04:26:30.501065    5891 start.go:360] acquireMachinesLock for embed-certs-434000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:30.501644    5891 start.go:364] duration metric: took 440.75µs to acquireMachinesLock for "embed-certs-434000"
	I0826 04:26:30.501808    5891 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:26:30.501830    5891 fix.go:54] fixHost starting: 
	I0826 04:26:30.502531    5891 fix.go:112] recreateIfNeeded on embed-certs-434000: state=Stopped err=<nil>
	W0826 04:26:30.502560    5891 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:26:30.510953    5891 out.go:177] * Restarting existing qemu2 VM for "embed-certs-434000" ...
	I0826 04:26:30.515933    5891 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:30.516175    5891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:9e:e9:af:69:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/embed-certs-434000/disk.qcow2
	I0826 04:26:30.525947    5891 main.go:141] libmachine: STDOUT: 
	I0826 04:26:30.526031    5891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:30.526138    5891 fix.go:56] duration metric: took 24.308208ms for fixHost
	I0826 04:26:30.526160    5891 start.go:83] releasing machines lock for "embed-certs-434000", held for 24.492042ms
	W0826 04:26:30.526341    5891 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-434000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-434000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:30.532530    5891 out.go:201] 
	W0826 04:26:30.537027    5891 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:30.537054    5891 out.go:270] * 
	* 
	W0826 04:26:30.539351    5891 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:26:30.547992    5891 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-434000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (68.979791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
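
Note the difference from FirstStart: the second run takes the fixHost path ("Skipping create...Using existing machine configuration") and merely restarts the stopped VM, but the restart dies on the same socket_vmnet connection. The report's own suggested remediation, spelled out with the commands as they appear above (restoring the socket_vmnet daemon first is the actual prerequisite):

	minikube delete -p embed-certs-434000
	# ...restore socket_vmnet (see the note after FirstStart), then re-run the start:
	out/minikube-darwin-arm64 start -p embed-certs-434000 --memory=2200 --alsologtostderr \
	  --wait=true --embed-certs --driver=qemu2 --kubernetes-version=v1.31.0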

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-434000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (33.016125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-434000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-434000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-434000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.429167ms)

** stderr ** 
	error: context "embed-certs-434000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-434000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (30.261875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-434000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (29.321375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
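
The -want +got diff shows every expected v1.31.0 image as missing because `image list` against a profile whose VM never started returns an empty set, not because individual images failed to load. The probe can be reproduced by hand with the command exactly as the test ran it:

	out/minikube-darwin-arm64 -p embed-certs-434000 image list --format=json
	# with the host Stopped this prints nothing, so all eight -want entries show as missing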

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-434000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-434000 --alsologtostderr -v=1: exit status 83 (42.329292ms)

-- stdout --
	* The control-plane node embed-certs-434000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-434000"

-- /stdout --
** stderr ** 
	I0826 04:26:30.819333    5910 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:30.819474    5910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:30.819477    5910 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:30.819479    5910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:30.819611    5910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:30.819837    5910 out.go:352] Setting JSON to false
	I0826 04:26:30.819843    5910 mustload.go:65] Loading cluster: embed-certs-434000
	I0826 04:26:30.820027    5910 config.go:182] Loaded profile config "embed-certs-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:30.824585    5910 out.go:177] * The control-plane node embed-certs-434000 host is not running: state=Stopped
	I0826 04:26:30.828809    5910 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-434000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-434000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (30.472833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (30.374458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-434000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
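
Exit status 83 here accompanies minikube's "host is not running" guidance rather than a pause-specific error: pause requires a running control-plane node. The harness's post-mortem probe can be run by hand, exactly as captured above:

	out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
	# prints "Stopped" and exits 7, matching the post-mortem output above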

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-727000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-727000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.987197917s)

-- stdout --
	* [default-k8s-diff-port-727000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-727000" primary control-plane node in "default-k8s-diff-port-727000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-727000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:26:31.278965    5941 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:31.279093    5941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:31.279097    5941 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:31.279099    5941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:31.279213    5941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:31.280276    5941 out.go:352] Setting JSON to false
	I0826 04:26:31.296295    5941 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3354,"bootTime":1724668237,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:26:31.296358    5941 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:26:31.300877    5941 out.go:177] * [default-k8s-diff-port-727000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:26:31.309760    5941 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:26:31.309813    5941 notify.go:220] Checking for updates...
	I0826 04:26:31.317684    5941 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:26:31.321744    5941 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:26:31.324794    5941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:26:31.328714    5941 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:26:31.331781    5941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:26:31.335190    5941 config.go:182] Loaded profile config "cert-expiration-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:31.335260    5941 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:31.335307    5941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:26:31.339680    5941 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:26:31.346724    5941 start.go:297] selected driver: qemu2
	I0826 04:26:31.346733    5941 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:26:31.346741    5941 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:26:31.348965    5941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 04:26:31.351736    5941 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:26:31.354902    5941 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:26:31.354933    5941 cni.go:84] Creating CNI manager for ""
	I0826 04:26:31.354940    5941 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:26:31.354944    5941 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:26:31.354982    5941 start.go:340] cluster config:
	{Name:default-k8s-diff-port-727000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:31.358624    5941 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:31.367756    5941 out.go:177] * Starting "default-k8s-diff-port-727000" primary control-plane node in "default-k8s-diff-port-727000" cluster
	I0826 04:26:31.371779    5941 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:26:31.371792    5941 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:26:31.371807    5941 cache.go:56] Caching tarball of preloaded images
	I0826 04:26:31.371854    5941 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:26:31.371860    5941 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:26:31.371924    5941 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/default-k8s-diff-port-727000/config.json ...
	I0826 04:26:31.371936    5941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/default-k8s-diff-port-727000/config.json: {Name:mk6c48d23494311a580314a6b2a5ce589967042a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:26:31.372175    5941 start.go:360] acquireMachinesLock for default-k8s-diff-port-727000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:31.372213    5941 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "default-k8s-diff-port-727000"
	I0826 04:26:31.372240    5941 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:26:31.372270    5941 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:26:31.380778    5941 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:26:31.399380    5941 start.go:159] libmachine.API.Create for "default-k8s-diff-port-727000" (driver="qemu2")
	I0826 04:26:31.399410    5941 client.go:168] LocalClient.Create starting
	I0826 04:26:31.399470    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:26:31.399503    5941 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:31.399512    5941 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:31.399552    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:26:31.399578    5941 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:31.399585    5941 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:31.399978    5941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:26:31.563560    5941 main.go:141] libmachine: Creating SSH key...
	I0826 04:26:31.657530    5941 main.go:141] libmachine: Creating Disk image...
	I0826 04:26:31.657535    5941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:26:31.657713    5941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2
	I0826 04:26:31.666952    5941 main.go:141] libmachine: STDOUT: 
	I0826 04:26:31.666974    5941 main.go:141] libmachine: STDERR: 
	I0826 04:26:31.667031    5941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2 +20000M
	I0826 04:26:31.674915    5941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:26:31.674931    5941 main.go:141] libmachine: STDERR: 
	I0826 04:26:31.674945    5941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2
	I0826 04:26:31.674950    5941 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:26:31.674961    5941 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:31.674997    5941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ff:a0:7b:4d:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2
	I0826 04:26:31.676548    5941 main.go:141] libmachine: STDOUT: 
	I0826 04:26:31.676564    5941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:31.676584    5941 client.go:171] duration metric: took 277.1765ms to LocalClient.Create
	I0826 04:26:33.678719    5941 start.go:128] duration metric: took 2.306479666s to createHost
	I0826 04:26:33.678779    5941 start.go:83] releasing machines lock for "default-k8s-diff-port-727000", held for 2.30660925s
	W0826 04:26:33.678870    5941 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:33.692345    5941 out.go:177] * Deleting "default-k8s-diff-port-727000" in qemu2 ...
	W0826 04:26:33.724732    5941 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:33.724753    5941 start.go:729] Will try again in 5 seconds ...
	I0826 04:26:38.726870    5941 start.go:360] acquireMachinesLock for default-k8s-diff-port-727000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:38.762565    5941 start.go:364] duration metric: took 35.54525ms to acquireMachinesLock for "default-k8s-diff-port-727000"
	I0826 04:26:38.762720    5941 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:26:38.762985    5941 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:26:38.779490    5941 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:26:38.830232    5941 start.go:159] libmachine.API.Create for "default-k8s-diff-port-727000" (driver="qemu2")
	I0826 04:26:38.830403    5941 client.go:168] LocalClient.Create starting
	I0826 04:26:38.830529    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:26:38.830585    5941 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:38.830605    5941 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:38.830677    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:26:38.830722    5941 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:38.830733    5941 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:38.831387    5941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:26:39.018755    5941 main.go:141] libmachine: Creating SSH key...
	I0826 04:26:39.167150    5941 main.go:141] libmachine: Creating Disk image...
	I0826 04:26:39.167156    5941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:26:39.167350    5941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2
	I0826 04:26:39.176870    5941 main.go:141] libmachine: STDOUT: 
	I0826 04:26:39.176891    5941 main.go:141] libmachine: STDERR: 
	I0826 04:26:39.176943    5941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2 +20000M
	I0826 04:26:39.184826    5941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:26:39.184842    5941 main.go:141] libmachine: STDERR: 
	I0826 04:26:39.184851    5941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2
	I0826 04:26:39.184855    5941 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:26:39.184870    5941 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:39.184899    5941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:b4:4d:39:67:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2
	I0826 04:26:39.186482    5941 main.go:141] libmachine: STDOUT: 
	I0826 04:26:39.186501    5941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:39.186512    5941 client.go:171] duration metric: took 356.112ms to LocalClient.Create
	I0826 04:26:41.188640    5941 start.go:128] duration metric: took 2.4256725s to createHost
	I0826 04:26:41.188692    5941 start.go:83] releasing machines lock for "default-k8s-diff-port-727000", held for 2.426154208s
	W0826 04:26:41.189043    5941 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-727000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-727000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:41.207753    5941 out.go:201] 
	W0826 04:26:41.211722    5941 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:41.211758    5941 out.go:270] * 
	* 
	W0826 04:26:41.214441    5941 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:26:41.225657    5941 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-727000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (66.641334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)
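
Note: every qemu2 VM creation in this run dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. A minimal Go probe (a sketch, not minikube code; the only thing it takes from the log is the socket path) reproduces the failing dial:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Attempt the same unix-socket connection that socket_vmnet_client
		// makes before passing a file descriptor to QEMU (-netdev socket,fd=3).
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // e.g. connection refused
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refused dial here means the socket_vmnet daemon is not running (or not listening at that path) on the CI host; every test below that needs a VM inherits the same failure.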

TestStartStop/group/newest-cni/serial/FirstStart (10.08s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.01226275s)

-- stdout --
	* [newest-cni-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-584000" primary control-plane node in "newest-cni-584000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-584000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:26:36.364932    5957 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:36.365096    5957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:36.365100    5957 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:36.365103    5957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:36.365232    5957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:36.366271    5957 out.go:352] Setting JSON to false
	I0826 04:26:36.382426    5957 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3359,"bootTime":1724668237,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:26:36.382489    5957 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:26:36.386777    5957 out.go:177] * [newest-cni-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:26:36.395818    5957 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:26:36.395874    5957 notify.go:220] Checking for updates...
	I0826 04:26:36.403532    5957 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:26:36.407726    5957 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:26:36.410748    5957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:26:36.412294    5957 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:26:36.415738    5957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:26:36.419057    5957 config.go:182] Loaded profile config "default-k8s-diff-port-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:36.419123    5957 config.go:182] Loaded profile config "multinode-143000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:36.419176    5957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:26:36.420929    5957 out.go:177] * Using the qemu2 driver based on user configuration
	I0826 04:26:36.427685    5957 start.go:297] selected driver: qemu2
	I0826 04:26:36.427690    5957 start.go:901] validating driver "qemu2" against <nil>
	I0826 04:26:36.427695    5957 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:26:36.429853    5957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0826 04:26:36.429882    5957 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0826 04:26:36.433567    5957 out.go:177] * Automatically selected the socket_vmnet network
	I0826 04:26:36.440809    5957 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0826 04:26:36.440851    5957 cni.go:84] Creating CNI manager for ""
	I0826 04:26:36.440858    5957 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:26:36.440862    5957 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 04:26:36.440893    5957 start.go:340] cluster config:
	{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:36.444670    5957 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:36.453741    5957 out.go:177] * Starting "newest-cni-584000" primary control-plane node in "newest-cni-584000" cluster
	I0826 04:26:36.457684    5957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:26:36.457703    5957 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:26:36.457712    5957 cache.go:56] Caching tarball of preloaded images
	I0826 04:26:36.457782    5957 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:26:36.457788    5957 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:26:36.457857    5957 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/newest-cni-584000/config.json ...
	I0826 04:26:36.457870    5957 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/newest-cni-584000/config.json: {Name:mkf959c83db28a5e3cdcd5b261ae73d1b3eab2c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 04:26:36.458142    5957 start.go:360] acquireMachinesLock for newest-cni-584000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:36.458179    5957 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "newest-cni-584000"
	I0826 04:26:36.458191    5957 start.go:93] Provisioning new machine with config: &{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:26:36.458221    5957 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:26:36.466777    5957 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:26:36.485010    5957 start.go:159] libmachine.API.Create for "newest-cni-584000" (driver="qemu2")
	I0826 04:26:36.485040    5957 client.go:168] LocalClient.Create starting
	I0826 04:26:36.485101    5957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:26:36.485130    5957 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:36.485143    5957 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:36.485183    5957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:26:36.485210    5957 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:36.485221    5957 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:36.485562    5957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:26:36.647130    5957 main.go:141] libmachine: Creating SSH key...
	I0826 04:26:36.740778    5957 main.go:141] libmachine: Creating Disk image...
	I0826 04:26:36.740783    5957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:26:36.740944    5957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2
	I0826 04:26:36.750301    5957 main.go:141] libmachine: STDOUT: 
	I0826 04:26:36.750322    5957 main.go:141] libmachine: STDERR: 
	I0826 04:26:36.750376    5957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2 +20000M
	I0826 04:26:36.758549    5957 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:26:36.758562    5957 main.go:141] libmachine: STDERR: 
	I0826 04:26:36.758575    5957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2
	I0826 04:26:36.758580    5957 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:26:36.758593    5957 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:36.758620    5957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:db:d2:4d:ca:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2
	I0826 04:26:36.760241    5957 main.go:141] libmachine: STDOUT: 
	I0826 04:26:36.760255    5957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:36.760273    5957 client.go:171] duration metric: took 275.23325ms to LocalClient.Create
	I0826 04:26:38.762390    5957 start.go:128] duration metric: took 2.304202542s to createHost
	I0826 04:26:38.762448    5957 start.go:83] releasing machines lock for "newest-cni-584000", held for 2.304313209s
	W0826 04:26:38.762551    5957 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:38.790799    5957 out.go:177] * Deleting "newest-cni-584000" in qemu2 ...
	W0826 04:26:38.815682    5957 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:38.815704    5957 start.go:729] Will try again in 5 seconds ...
	I0826 04:26:43.817811    5957 start.go:360] acquireMachinesLock for newest-cni-584000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:43.818378    5957 start.go:364] duration metric: took 458.084µs to acquireMachinesLock for "newest-cni-584000"
	I0826 04:26:43.818489    5957 start.go:93] Provisioning new machine with config: &{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0826 04:26:43.818818    5957 start.go:125] createHost starting for "" (driver="qemu2")
	I0826 04:26:43.824665    5957 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 04:26:43.874523    5957 start.go:159] libmachine.API.Create for "newest-cni-584000" (driver="qemu2")
	I0826 04:26:43.874581    5957 client.go:168] LocalClient.Create starting
	I0826 04:26:43.874690    5957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/ca.pem
	I0826 04:26:43.874745    5957 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:43.874764    5957 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:43.874836    5957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19501-1045/.minikube/certs/cert.pem
	I0826 04:26:43.874871    5957 main.go:141] libmachine: Decoding PEM data...
	I0826 04:26:43.874882    5957 main.go:141] libmachine: Parsing certificate...
	I0826 04:26:43.875411    5957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0826 04:26:44.056170    5957 main.go:141] libmachine: Creating SSH key...
	I0826 04:26:44.275280    5957 main.go:141] libmachine: Creating Disk image...
	I0826 04:26:44.275290    5957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0826 04:26:44.275486    5957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2.raw /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2
	I0826 04:26:44.285552    5957 main.go:141] libmachine: STDOUT: 
	I0826 04:26:44.285576    5957 main.go:141] libmachine: STDERR: 
	I0826 04:26:44.285648    5957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2 +20000M
	I0826 04:26:44.293820    5957 main.go:141] libmachine: STDOUT: Image resized.
	
	I0826 04:26:44.293832    5957 main.go:141] libmachine: STDERR: 
	I0826 04:26:44.293842    5957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2
	I0826 04:26:44.293854    5957 main.go:141] libmachine: Starting QEMU VM...
	I0826 04:26:44.293865    5957 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:44.293899    5957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:ca:2d:23:0a:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2
	I0826 04:26:44.295509    5957 main.go:141] libmachine: STDOUT: 
	I0826 04:26:44.295522    5957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:44.295536    5957 client.go:171] duration metric: took 420.958167ms to LocalClient.Create
	I0826 04:26:46.297693    5957 start.go:128] duration metric: took 2.478903417s to createHost
	I0826 04:26:46.297823    5957 start.go:83] releasing machines lock for "newest-cni-584000", held for 2.479429208s
	W0826 04:26:46.298179    5957 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-584000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-584000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:46.309723    5957 out.go:201] 
	W0826 04:26:46.316804    5957 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:46.316855    5957 out.go:270] * 
	* 
	W0826 04:26:46.319861    5957 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:26:46.331729    5957 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (66.304625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.08s)
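
Note: the stderr above shows the create path retrying exactly once: "StartHost failed, but will try again" is followed by "Will try again in 5 seconds" (start.go:729), and the second attempt hits the same refused connection before the run exits with GUEST_PROVISION. The sketch below only models that observed flow; startHost is a hypothetical stand-in, not the real createHost implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the provisioning step that fails above; the
	// real code creates a VM, this stub only reproduces the observed error.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the fixed delay visible in the log
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}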

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-727000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-727000 create -f testdata/busybox.yaml: exit status 1 (30.112833ms)

** stderr ** 
	error: context "default-k8s-diff-port-727000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-727000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (33.357542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (28.981375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
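
Note: this failure, and the EnableAddonWhileActive and SecondStart failures below, are cascades from the failed FirstStart: the cluster was never provisioned, so no kubeconfig context named default-k8s-diff-port-727000 was written, and every kubectl --context call exits immediately with "context does not exist". A small client-go sketch (assuming k8s.io/client-go is available; it is not part of this test suite) lists the contexts actually present in the kubeconfig the run points at:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig via the default loading rules (these honor
		// $KUBECONFIG, which the run sets to .../19501-1045/kubeconfig) and
		// print the context names; the failed profile's context is missing.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		for name := range cfg.Contexts {
			fmt.Println("context:", name)
		}
	}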

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-727000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-727000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-727000 describe deploy/metrics-server -n kube-system: exit status 1 (26.3205ms)

** stderr ** 
	error: context "default-k8s-diff-port-727000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-727000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (28.605125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-727000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-727000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.283858041s)

-- stdout --
	* [default-k8s-diff-port-727000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-727000" primary control-plane node in "default-k8s-diff-port-727000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0826 04:26:45.131212    6014 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:45.131333    6014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:45.131337    6014 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:45.131339    6014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:45.131452    6014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:45.132457    6014 out.go:352] Setting JSON to false
	I0826 04:26:45.148615    6014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3368,"bootTime":1724668237,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:26:45.148697    6014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:26:45.153263    6014 out.go:177] * [default-k8s-diff-port-727000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:26:45.159184    6014 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:26:45.159238    6014 notify.go:220] Checking for updates...
	I0826 04:26:45.166136    6014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:26:45.169124    6014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:26:45.172170    6014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:26:45.175079    6014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:26:45.178163    6014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:26:45.181420    6014 config.go:182] Loaded profile config "default-k8s-diff-port-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:45.181670    6014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:26:45.185045    6014 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:26:45.192169    6014 start.go:297] selected driver: qemu2
	I0826 04:26:45.192177    6014 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:45.192245    6014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:26:45.194513    6014 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 04:26:45.194573    6014 cni.go:84] Creating CNI manager for ""
	I0826 04:26:45.194581    6014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:26:45.194603    6014 start.go:340] cluster config:
	{Name:default-k8s-diff-port-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:45.198080    6014 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:45.207150    6014 out.go:177] * Starting "default-k8s-diff-port-727000" primary control-plane node in "default-k8s-diff-port-727000" cluster
	I0826 04:26:45.211052    6014 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:26:45.211064    6014 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:26:45.211072    6014 cache.go:56] Caching tarball of preloaded images
	I0826 04:26:45.211129    6014 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:26:45.211138    6014 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:26:45.211197    6014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/default-k8s-diff-port-727000/config.json ...
	I0826 04:26:45.211743    6014 start.go:360] acquireMachinesLock for default-k8s-diff-port-727000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:46.298009    6014 start.go:364] duration metric: took 1.086225042s to acquireMachinesLock for "default-k8s-diff-port-727000"
	I0826 04:26:46.298166    6014 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:26:46.298204    6014 fix.go:54] fixHost starting: 
	I0826 04:26:46.298879    6014 fix.go:112] recreateIfNeeded on default-k8s-diff-port-727000: state=Stopped err=<nil>
	W0826 04:26:46.298921    6014 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:26:46.313518    6014 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-727000" ...
	I0826 04:26:46.320750    6014 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:46.320980    6014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:b4:4d:39:67:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2
	I0826 04:26:46.331023    6014 main.go:141] libmachine: STDOUT: 
	I0826 04:26:46.331123    6014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:46.331261    6014 fix.go:56] duration metric: took 33.067042ms for fixHost
	I0826 04:26:46.331283    6014 start.go:83] releasing machines lock for "default-k8s-diff-port-727000", held for 33.238833ms
	W0826 04:26:46.331323    6014 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:46.331581    6014 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:46.331601    6014 start.go:729] Will try again in 5 seconds ...
	I0826 04:26:51.333728    6014 start.go:360] acquireMachinesLock for default-k8s-diff-port-727000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:51.334150    6014 start.go:364] duration metric: took 318.375µs to acquireMachinesLock for "default-k8s-diff-port-727000"
	I0826 04:26:51.334303    6014 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:26:51.334321    6014 fix.go:54] fixHost starting: 
	I0826 04:26:51.335055    6014 fix.go:112] recreateIfNeeded on default-k8s-diff-port-727000: state=Stopped err=<nil>
	W0826 04:26:51.335085    6014 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:26:51.338198    6014 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-727000" ...
	I0826 04:26:51.342743    6014 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:51.342952    6014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:b4:4d:39:67:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/default-k8s-diff-port-727000/disk.qcow2
	I0826 04:26:51.351962    6014 main.go:141] libmachine: STDOUT: 
	I0826 04:26:51.352047    6014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:51.352134    6014 fix.go:56] duration metric: took 17.809583ms for fixHost
	I0826 04:26:51.352152    6014 start.go:83] releasing machines lock for "default-k8s-diff-port-727000", held for 17.981083ms
	W0826 04:26:51.352385    6014 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-727000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-727000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:51.359646    6014 out.go:201] 
	W0826 04:26:51.362716    6014 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:51.362748    6014 out.go:270] * 
	* 
	W0826 04:26:51.365514    6014 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:26:51.373639    6014 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-727000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (65.541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.35s)
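
Every failure in this group traces to the same host-side precondition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a networking file descriptor and the VM is never started. The following is a minimal Go sketch (not minikube code) of the probe the driver effectively performs, assuming only the socket path shown in the logs above:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client passes to QEMU as fd 3.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// With the daemon down this prints "... connect: connection refused",
			// matching the STDERR captured above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

When this dial fails, nothing the test does can recover; the socket_vmnet daemon has to be brought back up on the CI host (on macOS it is commonly run as a root launchd service) before any qemu2 profile can start.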

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.179444792s)

                                                
                                                
-- stdout --
	* [newest-cni-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-584000" primary control-plane node in "newest-cni-584000" cluster
	* Restarting existing qemu2 VM for "newest-cni-584000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-584000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 04:26:48.756092    6041 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:48.756225    6041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:48.756228    6041 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:48.756230    6041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:48.756367    6041 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:48.757391    6041 out.go:352] Setting JSON to false
	I0826 04:26:48.773332    6041 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3371,"bootTime":1724668237,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 04:26:48.773408    6041 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 04:26:48.775744    6041 out.go:177] * [newest-cni-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 04:26:48.782763    6041 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 04:26:48.782824    6041 notify.go:220] Checking for updates...
	I0826 04:26:48.788709    6041 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 04:26:48.791732    6041 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 04:26:48.793290    6041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 04:26:48.796673    6041 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 04:26:48.799734    6041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 04:26:48.803053    6041 config.go:182] Loaded profile config "newest-cni-584000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:48.803323    6041 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 04:26:48.806695    6041 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 04:26:48.813662    6041 start.go:297] selected driver: qemu2
	I0826 04:26:48.813669    6041 start.go:901] validating driver "qemu2" against &{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:48.813710    6041 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 04:26:48.815858    6041 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0826 04:26:48.815900    6041 cni.go:84] Creating CNI manager for ""
	I0826 04:26:48.815907    6041 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 04:26:48.815933    6041 start.go:340] cluster config:
	{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 04:26:48.819282    6041 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 04:26:48.827747    6041 out.go:177] * Starting "newest-cni-584000" primary control-plane node in "newest-cni-584000" cluster
	I0826 04:26:48.831650    6041 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 04:26:48.831663    6041 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 04:26:48.831669    6041 cache.go:56] Caching tarball of preloaded images
	I0826 04:26:48.831715    6041 preload.go:172] Found /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0826 04:26:48.831720    6041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0826 04:26:48.831784    6041 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/newest-cni-584000/config.json ...
	I0826 04:26:48.832330    6041 start.go:360] acquireMachinesLock for newest-cni-584000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:48.832356    6041 start.go:364] duration metric: took 20.875µs to acquireMachinesLock for "newest-cni-584000"
	I0826 04:26:48.832364    6041 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:26:48.832370    6041 fix.go:54] fixHost starting: 
	I0826 04:26:48.832489    6041 fix.go:112] recreateIfNeeded on newest-cni-584000: state=Stopped err=<nil>
	W0826 04:26:48.832496    6041 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:26:48.836733    6041 out.go:177] * Restarting existing qemu2 VM for "newest-cni-584000" ...
	I0826 04:26:48.843757    6041 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:48.843810    6041 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:ca:2d:23:0a:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2
	I0826 04:26:48.845800    6041 main.go:141] libmachine: STDOUT: 
	I0826 04:26:48.845820    6041 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:48.845847    6041 fix.go:56] duration metric: took 13.478542ms for fixHost
	I0826 04:26:48.845851    6041 start.go:83] releasing machines lock for "newest-cni-584000", held for 13.49125ms
	W0826 04:26:48.845859    6041 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:48.845899    6041 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:48.845903    6041 start.go:729] Will try again in 5 seconds ...
	I0826 04:26:53.848012    6041 start.go:360] acquireMachinesLock for newest-cni-584000: {Name:mkeadacc249a86d6cd856b5a20675ee4945bb355 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 04:26:53.848555    6041 start.go:364] duration metric: took 426.292µs to acquireMachinesLock for "newest-cni-584000"
	I0826 04:26:53.848756    6041 start.go:96] Skipping create...Using existing machine configuration
	I0826 04:26:53.848778    6041 fix.go:54] fixHost starting: 
	I0826 04:26:53.849541    6041 fix.go:112] recreateIfNeeded on newest-cni-584000: state=Stopped err=<nil>
	W0826 04:26:53.849567    6041 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 04:26:53.858908    6041 out.go:177] * Restarting existing qemu2 VM for "newest-cni-584000" ...
	I0826 04:26:53.863008    6041 qemu.go:418] Using hvf for hardware acceleration
	I0826 04:26:53.863207    6041 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:ca:2d:23:0a:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19501-1045/.minikube/machines/newest-cni-584000/disk.qcow2
	I0826 04:26:53.873175    6041 main.go:141] libmachine: STDOUT: 
	I0826 04:26:53.873243    6041 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0826 04:26:53.873343    6041 fix.go:56] duration metric: took 24.569042ms for fixHost
	I0826 04:26:53.873358    6041 start.go:83] releasing machines lock for "newest-cni-584000", held for 24.751083ms
	W0826 04:26:53.873546    6041 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-584000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-584000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0826 04:26:53.881973    6041 out.go:201] 
	W0826 04:26:53.885012    6041 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0826 04:26:53.885042    6041 out.go:270] * 
	* 
	W0826 04:26:53.887811    6041 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 04:26:53.898908    6041 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (70.351417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
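
Both SecondStart logs show the same bounded retry: fixHost fails, minikube prints "Will try again in 5 seconds ...", retries once, and only then exits 80 with GUEST_PROVISION. A hedged sketch of that control flow (illustrative only, not the actual minikube implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that keeps failing above.
	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

Because the root cause is a missing host daemon rather than a transient VM condition, the second attempt is guaranteed to fail identically, which is why both profiles die within about five to six seconds.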

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-727000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (31.918167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
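
The "context ... does not exist" error is a downstream symptom rather than a new failure: SecondStart exited before provisioning, so minikube never wrote a kubeconfig context for this profile, and the test's client-config lookup fails immediately. A sketch of that lookup using client-go (assumed here for illustration; the test helper may resolve its config differently):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig the way kubectl does (KUBECONFIG, else ~/.kube/config).
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts["default-k8s-diff-port-727000"]; !ok {
			fmt.Println(`context "default-k8s-diff-port-727000" does not exist`)
		}
	}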

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-727000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-727000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-727000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.585667ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-727000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-727000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (29.385375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-727000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (29.248375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
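
The "(-want +got)" block above is the diff format produced by github.com/google/go-cmp: every expected v1.31.0 image carries a "-" prefix because `image list` on the never-started host returned nothing to diff against. A minimal reproduction, assuming go-cmp is what the test helper uses:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // the stopped host listed no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
		}
	}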

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-727000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-727000 --alsologtostderr -v=1: exit status 83 (41.633459ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-727000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-727000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 04:26:51.639654    6060 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:51.639795    6060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:51.639798    6060 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:51.639800    6060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:51.639935    6060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:51.640149    6060 out.go:352] Setting JSON to false
	I0826 04:26:51.640156    6060 mustload.go:65] Loading cluster: default-k8s-diff-port-727000
	I0826 04:26:51.640360    6060 config.go:182] Loaded profile config "default-k8s-diff-port-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:51.644794    6060 out.go:177] * The control-plane node default-k8s-diff-port-727000 host is not running: state=Stopped
	I0826 04:26:51.648713    6060 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-727000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-727000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (27.999792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (28.728833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
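
The post-mortem's `status --format={{.Host}}` flag is a Go text/template rendered against minikube's status structure, which is why only the bare word "Stopped" reaches stdout. A small illustrative sketch; the Status struct here is hypothetical, with only the Host field name taken from the flag in the log:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for whatever struct minikube actually renders.
	type Status struct {
		Host string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
			panic(err)
		}
		// Output: Stopped
	}

Exit status 7 from the same command is minikube's bit-encoded report that host, cluster, and Kubernetes are all down, which is why helpers_test.go notes it "may be ok" for a deliberately stopped profile.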

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-584000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (30.571958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-584000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-584000 --alsologtostderr -v=1: exit status 83 (41.854458ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-584000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-584000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 04:26:54.085085    6084 out.go:345] Setting OutFile to fd 1 ...
	I0826 04:26:54.085236    6084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:54.085239    6084 out.go:358] Setting ErrFile to fd 2...
	I0826 04:26:54.085241    6084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 04:26:54.085371    6084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 04:26:54.085605    6084 out.go:352] Setting JSON to false
	I0826 04:26:54.085612    6084 mustload.go:65] Loading cluster: newest-cni-584000
	I0826 04:26:54.085802    6084 config.go:182] Loaded profile config "newest-cni-584000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 04:26:54.090122    6084 out.go:177] * The control-plane node newest-cni-584000 host is not running: state=Stopped
	I0826 04:26:54.094114    6084 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-584000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-584000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (29.955458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (30.258375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (156/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 11.61
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 197.46
29 TestAddons/serial/Volcano 37.26
31 TestAddons/serial/GCPAuth/Namespaces 0.08
33 TestAddons/parallel/Registry 14.54
34 TestAddons/parallel/Ingress 19.03
35 TestAddons/parallel/InspektorGadget 10.25
36 TestAddons/parallel/MetricsServer 5.26
39 TestAddons/parallel/CSI 32.4
40 TestAddons/parallel/Headlamp 13.46
41 TestAddons/parallel/CloudSpanner 5.2
42 TestAddons/parallel/LocalPath 41.87
43 TestAddons/parallel/NvidiaDevicePlugin 6.18
44 TestAddons/parallel/Yakd 10.26
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 11.13
56 TestErrorSpam/setup 35.14
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.68
60 TestErrorSpam/unpause 0.65
61 TestErrorSpam/stop 55.24
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 46.89
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.42
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.59
73 TestFunctional/serial/CacheCmd/cache/add_local 1.19
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.74
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 38.55
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.68
84 TestFunctional/serial/LogsFileCmd 0.66
85 TestFunctional/serial/InvalidService 3.88
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 6.96
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.23
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 24.02
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.42
102 TestFunctional/parallel/FileSync 0.06
103 TestFunctional/parallel/CertSync 0.43
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
111 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.15
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.89
119 TestFunctional/parallel/ImageCommands/Setup 1.8
120 TestFunctional/parallel/DockerEnv/bash 0.27
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.48
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.56
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
137 TestFunctional/parallel/ServiceCmd/List 0.12
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
140 TestFunctional/parallel/ServiceCmd/Format 0.09
141 TestFunctional/parallel/ServiceCmd/URL 0.09
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.12
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 4.98
152 TestFunctional/parallel/MountCmd/specific-port 0.93
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.49
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 177.62
161 TestMultiControlPlane/serial/DeployApp 4
162 TestMultiControlPlane/serial/PingHostFromPods 0.71
163 TestMultiControlPlane/serial/AddWorkerNode 58.02
164 TestMultiControlPlane/serial/NodeLabels 0.12
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.27
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.94
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.76
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
257 TestStoppedBinaryUpgrade/Setup 1.09
259 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 0.09
276 TestNoKubernetes/serial/Stop 3.24
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
294 TestStartStop/group/old-k8s-version/serial/Stop 3.27
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/no-preload/serial/Stop 3.8
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
316 TestStartStop/group/embed-certs/serial/Stop 2.06
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.47
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
334 TestStartStop/group/newest-cni/serial/Stop 2.13
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-004000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-004000: exit status 85 (93.39225ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-004000 | jenkins | v1.33.1 | 26 Aug 24 03:34 PDT |          |
	|         | -p download-only-004000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 03:34:32
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 03:34:32.408501    1541 out.go:345] Setting OutFile to fd 1 ...
	I0826 03:34:32.408650    1541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:34:32.408654    1541 out.go:358] Setting ErrFile to fd 2...
	I0826 03:34:32.408656    1541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:34:32.408784    1541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	W0826 03:34:32.408881    1541 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19501-1045/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19501-1045/.minikube/config/config.json: no such file or directory
	I0826 03:34:32.410181    1541 out.go:352] Setting JSON to true
	I0826 03:34:32.427428    1541 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":236,"bootTime":1724668236,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 03:34:32.427491    1541 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 03:34:32.433171    1541 out.go:97] [download-only-004000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 03:34:32.433343    1541 notify.go:220] Checking for updates...
	W0826 03:34:32.433352    1541 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball: no such file or directory
	I0826 03:34:32.438105    1541 out.go:169] MINIKUBE_LOCATION=19501
	I0826 03:34:32.444103    1541 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 03:34:32.449163    1541 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 03:34:32.453058    1541 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 03:34:32.456062    1541 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	W0826 03:34:32.462049    1541 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0826 03:34:32.462249    1541 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 03:34:32.467043    1541 out.go:97] Using the qemu2 driver based on user configuration
	I0826 03:34:32.467063    1541 start.go:297] selected driver: qemu2
	I0826 03:34:32.467077    1541 start.go:901] validating driver "qemu2" against <nil>
	I0826 03:34:32.467146    1541 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 03:34:32.471113    1541 out.go:169] Automatically selected the socket_vmnet network
	I0826 03:34:32.476579    1541 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0826 03:34:32.476671    1541 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0826 03:34:32.476705    1541 cni.go:84] Creating CNI manager for ""
	I0826 03:34:32.476722    1541 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0826 03:34:32.476770    1541 start.go:340] cluster config:
	{Name:download-only-004000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-004000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 03:34:32.481871    1541 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 03:34:32.486066    1541 out.go:97] Downloading VM boot image ...
	I0826 03:34:32.486091    1541 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0826 03:34:44.138668    1541 out.go:97] Starting "download-only-004000" primary control-plane node in "download-only-004000" cluster
	I0826 03:34:44.138686    1541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0826 03:34:44.210563    1541 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0826 03:34:44.210569    1541 cache.go:56] Caching tarball of preloaded images
	I0826 03:34:44.210735    1541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0826 03:34:44.214925    1541 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0826 03:34:44.214932    1541 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0826 03:34:44.308360    1541 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0826 03:34:50.059751    1541 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0826 03:34:50.059928    1541 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0826 03:34:50.755714    1541 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0826 03:34:50.755923    1541 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/download-only-004000/config.json ...
	I0826 03:34:50.755942    1541 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/download-only-004000/config.json: {Name:mkfe3aa789db55db5093ac99da8ea4bd6b2ffa89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 03:34:50.756162    1541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0826 03:34:50.756332    1541 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0826 03:34:51.303970    1541 out.go:193] 
	W0826 03:34:51.310953    1541 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19501-1045/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e23920 0x108e23920 0x108e23920 0x108e23920 0x108e23920 0x108e23920 0x108e23920] Decompressors:map[bz2:0x1400050e300 gz:0x1400050e308 tar:0x1400050e2a0 tar.bz2:0x1400050e2c0 tar.gz:0x1400050e2d0 tar.xz:0x1400050e2e0 tar.zst:0x1400050e2f0 tbz2:0x1400050e2c0 tgz:0x1400050e2d0 txz:0x1400050e2e0 tzst:0x1400050e2f0 xz:0x1400050e310 zip:0x1400050e320 zst:0x1400050e318] Getters:map[file:0x140004a6d00 http:0x140005d2190 https:0x140005d21e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0826 03:34:51.310977    1541 out_reason.go:110] 
	W0826 03:34:51.319896    1541 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 03:34:51.322925    1541 out.go:193] 
	
	
	* The control-plane node download-only-004000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-004000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
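Editor's note: the cache failure captured above is go-getter fetching the detached .sha256 checksum before the kubectl binary itself; the checksum URL returns 404, most likely because no darwin/arm64 kubectl build was ever published for v1.20.0. The failure can be reproduced outside minikube with a plain HTTP check (curl follows dl.k8s.io's redirect to the CDN):

    # prints 404 - the v1.20.0 darwin/arm64 checksum file does not exist
    curl -sL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
    # prints 200 for a version that does ship darwin/arm64 binaries
    curl -sL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256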

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-004000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (11.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-578000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-578000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (11.611747042s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (11.61s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-578000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-578000: exit status 85 (79.280833ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-004000 | jenkins | v1.33.1 | 26 Aug 24 03:34 PDT |                     |
	|         | -p download-only-004000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 26 Aug 24 03:34 PDT | 26 Aug 24 03:34 PDT |
	| delete  | -p download-only-004000        | download-only-004000 | jenkins | v1.33.1 | 26 Aug 24 03:34 PDT | 26 Aug 24 03:34 PDT |
	| start   | -o=json --download-only        | download-only-578000 | jenkins | v1.33.1 | 26 Aug 24 03:34 PDT |                     |
	|         | -p download-only-578000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 03:34:51
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 03:34:51.726909    1578 out.go:345] Setting OutFile to fd 1 ...
	I0826 03:34:51.727053    1578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:34:51.727056    1578 out.go:358] Setting ErrFile to fd 2...
	I0826 03:34:51.727058    1578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:34:51.727175    1578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 03:34:51.728312    1578 out.go:352] Setting JSON to true
	I0826 03:34:51.745327    1578 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":255,"bootTime":1724668236,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 03:34:51.745389    1578 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 03:34:51.748844    1578 out.go:97] [download-only-578000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 03:34:51.748950    1578 notify.go:220] Checking for updates...
	I0826 03:34:51.752852    1578 out.go:169] MINIKUBE_LOCATION=19501
	I0826 03:34:51.756849    1578 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 03:34:51.760791    1578 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 03:34:51.763774    1578 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 03:34:51.766881    1578 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	W0826 03:34:51.772765    1578 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0826 03:34:51.772918    1578 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 03:34:51.775792    1578 out.go:97] Using the qemu2 driver based on user configuration
	I0826 03:34:51.775802    1578 start.go:297] selected driver: qemu2
	I0826 03:34:51.775806    1578 start.go:901] validating driver "qemu2" against <nil>
	I0826 03:34:51.775866    1578 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 03:34:51.778713    1578 out.go:169] Automatically selected the socket_vmnet network
	I0826 03:34:51.783812    1578 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0826 03:34:51.783923    1578 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0826 03:34:51.783943    1578 cni.go:84] Creating CNI manager for ""
	I0826 03:34:51.783952    1578 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0826 03:34:51.783957    1578 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 03:34:51.784006    1578 start.go:340] cluster config:
	{Name:download-only-578000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-578000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 03:34:51.787369    1578 iso.go:125] acquiring lock: {Name:mk859bee1c7de58c8a10e75b01bd87b0e1e74bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 03:34:51.790833    1578 out.go:97] Starting "download-only-578000" primary control-plane node in "download-only-578000" cluster
	I0826 03:34:51.790840    1578 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 03:34:51.850359    1578 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 03:34:51.850389    1578 cache.go:56] Caching tarball of preloaded images
	I0826 03:34:51.850550    1578 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0826 03:34:51.855710    1578 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0826 03:34:51.855719    1578 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0826 03:34:51.956479    1578 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0826 03:35:01.273883    1578 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0826 03:35:01.274044    1578 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19501-1045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-578000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-578000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-578000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestBinaryMirror (0.3s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-632000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-632000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-632000
--- PASS: TestBinaryMirror (0.30s)
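Editor's note: --binary-mirror redirects minikube's kubectl/kubelet/kubeadm downloads to an alternate HTTP endpoint; the test above stands up a throwaway server on 127.0.0.1:49311. A minimal sketch of the same setup, assuming a ./mirror directory pre-seeded with the binaries in the layout minikube requests from the mirror (directory name, profile name, and port here are hypothetical):

    # serve a local mirror directory, then point minikube at it
    python3 -m http.server 8080 --directory ./mirror &
    out/minikube-darwin-arm64 start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:8080 --driver=qemu2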

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-293000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-293000: exit status 85 (61.253667ms)

                                                
                                                
-- stdout --
	* Profile "addons-293000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-293000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-293000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-293000: exit status 85 (57.371666ms)

                                                
                                                
-- stdout --
	* Profile "addons-293000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-293000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (197.46s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-293000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-293000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m17.458779791s)
--- PASS: TestAddons/Setup (197.46s)
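Editor's note: after a start like the one above, the enabled addon set for the profile can be confirmed directly; minikube prints a per-addon status table:

    out/minikube-darwin-arm64 -p addons-293000 addons list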

                                                
                                    
TestAddons/serial/Volcano (37.26s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 6.734208ms
addons_test.go:905: volcano-admission stabilized in 6.797958ms
addons_test.go:913: volcano-controller stabilized in 6.839333ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-s9nxr" [e09620c1-1244-4caf-b450-097207dc688e] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005447833s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-gxjcv" [c0a97e13-c387-4f0f-8c06-83d0e1a81fb9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005446125s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-khlmw" [ee0656e9-b7b4-44fa-a3e5-50374c3e4f5e] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00896125s
addons_test.go:932: (dbg) Run:  kubectl --context addons-293000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-293000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-293000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [59dc1483-a1ba-4960-9e89-ff81da177bb4] Pending
helpers_test.go:344: "test-job-nginx-0" [59dc1483-a1ba-4960-9e89-ff81da177bb4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [59dc1483-a1ba-4960-9e89-ff81da177bb4] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.007270916s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-293000 addons disable volcano --alsologtostderr -v=1: (9.986672125s)
--- PASS: TestAddons/serial/Volcano (37.26s)
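Editor's note: the helpers above poll pods by label until they report Running/healthy. Outside the harness, the equivalent wait is a kubectl one-liner; the selector and timeout below mirror the test's app=volcano-scheduler label and 6m0s window:

    kubectl --context addons-293000 -n volcano-system wait pod \
      -l app=volcano-scheduler --for=condition=Ready --timeout=360s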

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.08s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-293000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-293000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

                                                
                                    
TestAddons/parallel/Registry (14.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.252333ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-q74n7" [48dbccfb-25e7-4a8c-a279-23663818cd9b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004766708s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q7lf5" [657a0c14-c80f-482c-8b9d-5f78549340b2] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011336875s
addons_test.go:342: (dbg) Run:  kubectl --context addons-293000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-293000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-293000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.245264125s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.54s)
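Editor's note: the wget --spider probe checks the registry from inside the cluster via service DNS. A stray debug line later in this run (GET http://192.168.105.2:5000) suggests the registry is also polled on node port 5000, so a host-side check against the Docker Registry v2 API would look like this (assuming 5000 remains the addon's node-side port):

    # a fresh registry answers {"repositories":[]}
    curl "http://$(out/minikube-darwin-arm64 -p addons-293000 ip):5000/v2/_catalog"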

                                                
                                    
TestAddons/parallel/Ingress (19.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-293000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-293000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-293000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3b2a742f-2af8-4ebe-b979-4f4d99aefca5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3b2a742f-2af8-4ebe-b979-4f4d99aefca5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00772075s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-293000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-293000 addons disable ingress-dns --alsologtostderr -v=1: (1.185747625s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-293000 addons disable ingress --alsologtostderr -v=1: (7.23873525s)
--- PASS: TestAddons/parallel/Ingress (19.03s)
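Editor's note: the nslookup above queries the ingress-dns resolver directly at the node IP. To make such hostnames resolve system-wide on macOS, the usual pattern is a per-domain resolver file pointing at the cluster; a sketch using the .test domain from this run (node IP taken from the log):

    sudo mkdir -p /etc/resolver
    echo "nameserver 192.168.105.2" | sudo tee /etc/resolver/test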

                                                
                                    
TestAddons/parallel/InspektorGadget (10.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-859p5" [a23a0153-ddb9-4ea0-b1e2-36c4e7a99cd9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004463958s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-293000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-293000: (5.247248583s)
--- PASS: TestAddons/parallel/InspektorGadget (10.25s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.318791ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-nrx4h" [f7810c69-cc14-434c-af1a-edf67d811009] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005099667s
addons_test.go:417: (dbg) Run:  kubectl --context addons-293000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)
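Editor's note: kubectl top is backed by metrics-server through the aggregated metrics API, so the API can also be queried raw to confirm the aggregation layer is serving:

    # returns a NodeMetricsList JSON document once metrics-server is up
    kubectl --context addons-293000 get --raw /apis/metrics.k8s.io/v1beta1/nodes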

                                                
                                    
TestAddons/parallel/CSI (32.4s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 58.2205ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-293000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-293000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9bbbd670-3ddc-4130-9697-06c8f538ee66] Pending
helpers_test.go:344: "task-pv-pod" [9bbbd670-3ddc-4130-9697-06c8f538ee66] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9bbbd670-3ddc-4130-9697-06c8f538ee66] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004795959s
addons_test.go:590: (dbg) Run:  kubectl --context addons-293000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-293000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-293000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-293000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-293000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-293000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/08/26 03:39:30 [DEBUG] GET http://192.168.105.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-293000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6ac5a965-c704-4ba0-bdce-5009099cb705] Pending
helpers_test.go:344: "task-pv-pod-restore" [6ac5a965-c704-4ba0-bdce-5009099cb705] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6ac5a965-c704-4ba0-bdce-5009099cb705] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005790125s
addons_test.go:632: (dbg) Run:  kubectl --context addons-293000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-293000 delete pod task-pv-pod-restore: (1.069870791s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-293000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-293000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-293000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.145544375s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (32.40s)
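Editor's note: the repeated jsonpath invocations above are the harness polling until the CSI provisioner binds the claim. As a standalone shell loop, the same wait reads:

    # poll until the claim reports Bound (what the helper does above)
    until [ "$(kubectl --context addons-293000 get pvc hpvc \
        -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done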

                                                
                                    
TestAddons/parallel/Headlamp (13.46s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-293000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-vncjz" [dce0ae36-2d2a-420d-b6d9-eb06154fa665] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-vncjz" [dce0ae36-2d2a-420d-b6d9-eb06154fa665] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.010644959s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (13.46s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.2s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-58w25" [391dc899-8c50-4f8b-b8eb-4bd19674d864] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009654875s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-293000
--- PASS: TestAddons/parallel/CloudSpanner (5.20s)

                                                
                                    
TestAddons/parallel/LocalPath (41.87s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-293000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-293000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8d9d5025-df19-45ed-9c43-e7a9f99df30a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8d9d5025-df19-45ed-9c43-e7a9f99df30a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8d9d5025-df19-45ed-9c43-e7a9f99df30a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003804916s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-293000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 ssh "cat /opt/local-path-provisioner/pvc-ef8ae589-3745-425c-a64b-883ae06654d4_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-293000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-293000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-293000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.386708458s)
--- PASS: TestAddons/parallel/LocalPath (41.87s)
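Editor's note: local-path-provisioner backs each bound PVC with a plain host directory under /opt/local-path-provisioner (one directory per claim, named pvc-<uid>_<namespace>_<claim>), which is why the test can ssh in and cat file1. The backing store can be inspected the same way:

    out/minikube-darwin-arm64 -p addons-293000 ssh "ls /opt/local-path-provisioner"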

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.18s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gztxw" [c0f14852-4d37-4908-80ff-ef4b348f05a5] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007593125s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-293000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.18s)
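Editor's note: once the device plugin registers, GPUs surface as an allocatable node resource; under the qemu2 driver no NVIDIA hardware is exposed, so this test is effectively a schema check. The allocatable count can be read with kubectl (note the escaped dots in the jsonpath key):

    # empty output expected here: the qemu2 VM exposes no GPUs
    kubectl --context addons-293000 get nodes \
      -o jsonpath='{.items[*].status.allocatable.nvidia\.com/gpu}'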

                                                
                                    
TestAddons/parallel/Yakd (10.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qltgl" [7160e96d-d363-4f3e-ba55-f457a762443a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00694625s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-293000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-293000 addons disable yakd --alsologtostderr -v=1: (5.24917125s)
--- PASS: TestAddons/parallel/Yakd (10.26s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-293000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-293000: (12.207325875s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-293000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-293000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-293000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (11.13s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.13s)

                                                
                                    
TestErrorSpam/setup (35.14s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-970000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-970000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 --driver=qemu2 : (35.138895375s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (35.14s)
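Editor's note: the accepted stderr line is the client/server version-skew warning (host kubectl 1.29.2 against a 1.31.0 cluster). minikube bundles a version-matched kubectl that sidesteps the skew:

    out/minikube-darwin-arm64 -p nospam-970000 kubectl -- version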

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.24s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 status
--- PASS: TestErrorSpam/status (0.24s)

                                                
                                    
TestErrorSpam/pause (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 pause
--- PASS: TestErrorSpam/pause (0.68s)

                                                
                                    
TestErrorSpam/unpause (0.65s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 unpause
--- PASS: TestErrorSpam/unpause (0.65s)

                                                
                                    
TestErrorSpam/stop (55.24s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 stop: (3.172706834s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 stop: (26.03656725s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-970000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-970000 stop: (26.0313765s)
--- PASS: TestErrorSpam/stop (55.24s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19501-1045/.minikube/files/etc/test/nested/copy/1539/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-690000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-690000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (46.889376666s)
--- PASS: TestFunctional/serial/StartWithProxy (46.89s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-690000 --alsologtostderr -v=8
E0826 03:43:21.597537    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:21.606690    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:21.620127    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:21.643104    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:21.686640    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:21.770280    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:21.933820    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:22.257427    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:22.901200    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:24.184982    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:26.748466    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:31.871678    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:43:42.115056    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-690000 --alsologtostderr -v=8: (38.417720625s)
functional_test.go:663: soft start took 38.418194416s for "functional-690000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.42s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-690000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local206909634/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 cache add minikube-local-cache-test:functional-690000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 cache delete minikube-local-cache-test:functional-690000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-690000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (64.566417ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)

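Note: the cache_reload sequence above can be replayed by hand against any running cluster; a minimal sketch, assuming a profile named functional-690000 with the Docker runtime, as in this run:

	minikube -p functional-690000 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-690000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit status 1: image is gone from the node
	minikube -p functional-690000 cache reload                                            # pushes every locally cached image back into the node
	minikube -p functional-690000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again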
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 kubectl -- --context functional-690000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-690000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-690000 get pods: (1.0181845s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-690000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0826 03:44:02.597534    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-690000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.546109584s)
functional_test.go:761: restart took 38.546208834s for "functional-690000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.55s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-690000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.68s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2137472484/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.66s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-690000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-690000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-690000: exit status 115 (147.738625ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32657 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-690000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.88s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 config get cpus: exit status 14 (29.232333ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 config get cpus: exit status 14 (30.781667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

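Note: the ConfigCmd round-trip above can be replayed manually; a minimal sketch, assuming the same profile name (exit status 14 with "Error: specified key could not be found in config" is the expected result of `config get` on an unset key):

	minikube -p functional-690000 config unset cpus
	minikube -p functional-690000 config get cpus     # exit status 14: key not set
	minikube -p functional-690000 config set cpus 2
	minikube -p functional-690000 config get cpus     # prints the stored value, exit status 0
	minikube -p functional-690000 config unset cpus   # restore the unset state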
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-690000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-690000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2503: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.96s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-690000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-690000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.514042ms)

-- stdout --
	* [functional-690000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0826 03:45:27.249863    2486 out.go:345] Setting OutFile to fd 1 ...
	I0826 03:45:27.250018    2486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:45:27.250022    2486 out.go:358] Setting ErrFile to fd 2...
	I0826 03:45:27.250024    2486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:45:27.250160    2486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 03:45:27.251214    2486 out.go:352] Setting JSON to false
	I0826 03:45:27.268608    2486 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":891,"bootTime":1724668236,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 03:45:27.268676    2486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 03:45:27.274739    2486 out.go:177] * [functional-690000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0826 03:45:27.281776    2486 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 03:45:27.281804    2486 notify.go:220] Checking for updates...
	I0826 03:45:27.289764    2486 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 03:45:27.292775    2486 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 03:45:27.295749    2486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 03:45:27.296884    2486 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 03:45:27.299767    2486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 03:45:27.303054    2486 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 03:45:27.303322    2486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 03:45:27.307606    2486 out.go:177] * Using the qemu2 driver based on existing profile
	I0826 03:45:27.314789    2486 start.go:297] selected driver: qemu2
	I0826 03:45:27.314796    2486 start.go:901] validating driver "qemu2" against &{Name:functional-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 03:45:27.314845    2486 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 03:45:27.321779    2486 out.go:201] 
	W0826 03:45:27.325722    2486 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0826 03:45:27.329824    2486 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-690000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

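Note: the non-zero exit above is the validation path under test, not a crash; a minimal sketch of reproducing it, assuming the same profile (minikube rejects any --memory request below its 1800MB usable minimum with RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23, while a dry run with the default memory passes):

	minikube start -p functional-690000 --dry-run --memory 250MB --driver=qemu2 ; echo "exit: $?"   # exit: 23
	minikube start -p functional-690000 --dry-run --driver=qemu2 ; echo "exit: $?"                  # exit: 0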
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-690000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-690000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.126334ms)

-- stdout --
	* [functional-690000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0826 03:45:27.471261    2497 out.go:345] Setting OutFile to fd 1 ...
	I0826 03:45:27.471373    2497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:45:27.471376    2497 out.go:358] Setting ErrFile to fd 2...
	I0826 03:45:27.471379    2497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 03:45:27.471514    2497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
	I0826 03:45:27.472998    2497 out.go:352] Setting JSON to false
	I0826 03:45:27.489990    2497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":891,"bootTime":1724668236,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0826 03:45:27.490082    2497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0826 03:45:27.498494    2497 out.go:177] * [functional-690000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0826 03:45:27.506874    2497 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 03:45:27.506915    2497 notify.go:220] Checking for updates...
	I0826 03:45:27.514762    2497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	I0826 03:45:27.517778    2497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0826 03:45:27.519036    2497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 03:45:27.521711    2497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	I0826 03:45:27.524754    2497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 03:45:27.528074    2497 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0826 03:45:27.528387    2497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 03:45:27.532747    2497 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0826 03:45:27.539740    2497 start.go:297] selected driver: qemu2
	I0826 03:45:27.539747    2497 start.go:901] validating driver "qemu2" against &{Name:functional-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 03:45:27.539808    2497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 03:45:27.546758    2497 out.go:201] 
	W0826 03:45:27.550815    2497 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0826 03:45:27.554704    2497 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3fe58ecc-3b78-476f-b8c2-fdfeda4e6443] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008354125s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-690000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-690000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-690000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-690000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1acee30f-c132-4a82-8cc1-ebc29e9d0772] Pending
helpers_test.go:344: "sp-pod" [1acee30f-c132-4a82-8cc1-ebc29e9d0772] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1acee30f-c132-4a82-8cc1-ebc29e9d0772] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.01230525s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-690000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-690000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-690000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b1afd47c-1064-4650-aef0-5e8e4f34e315] Pending
helpers_test.go:344: "sp-pod" [b1afd47c-1064-4650-aef0-5e8e4f34e315] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b1afd47c-1064-4650-aef0-5e8e4f34e315] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.013822542s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-690000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.02s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh -n functional-690000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 cp functional-690000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2721220720/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh -n functional-690000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh -n functional-690000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1539/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo cat /etc/test/nested/copy/1539/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1539.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo cat /etc/ssl/certs/1539.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1539.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo cat /usr/share/ca-certificates/1539.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo cat /etc/ssl/certs/15392.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo cat /usr/share/ca-certificates/15392.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0826 03:44:43.558374    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/CertSync (0.43s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-690000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 ssh "sudo systemctl is-active crio": exit status 1 (67.206542ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-690000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-690000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-690000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-690000 image ls --format short --alsologtostderr:
I0826 03:45:34.057862    2525 out.go:345] Setting OutFile to fd 1 ...
I0826 03:45:34.058037    2525 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.058042    2525 out.go:358] Setting ErrFile to fd 2...
I0826 03:45:34.058045    2525 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.058210    2525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
I0826 03:45:34.058698    2525 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.058762    2525 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.059600    2525 ssh_runner.go:195] Run: systemctl --version
I0826 03:45:34.059611    2525 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/functional-690000/id_rsa Username:docker}
I0826 03:45:34.084040    2525 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-690000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-690000 | 349fbdb5881e1 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kicbase/echo-server               | functional-690000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-690000 image ls --format table --alsologtostderr:
I0826 03:45:34.636077    2536 out.go:345] Setting OutFile to fd 1 ...
I0826 03:45:34.636266    2536 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.636271    2536 out.go:358] Setting ErrFile to fd 2...
I0826 03:45:34.636274    2536 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.636408    2536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
I0826 03:45:34.636884    2536 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.636948    2536 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.637795    2536 ssh_runner.go:195] Run: systemctl --version
I0826 03:45:34.637805    2536 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/functional-690000/id_rsa Username:docker}
I0826 03:45:34.666132    2536 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)
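
For reference, each of these ImageCommands list tests is a thin wrapper around the CLI: run the binary, capture combined output, and fail on a non-zero exit. A minimal Go sketch of the same call (not the harness itself), reusing the binary path and profile name from this run:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Binary path and profile name taken from this run's logs.
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-690000",
            "image", "ls", "--format", "table", "--alsologtostderr")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("image ls failed: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
    }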

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-690000 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-690000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"349fbdb5881e162d98e9005784c939e7e93268e52c0833ba8b52dec0f91a2172","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-690000"],"size":"30"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-690000 image ls --format json --alsologtostderr:
I0826 03:45:34.551815    2534 out.go:345] Setting OutFile to fd 1 ...
I0826 03:45:34.551979    2534 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.551983    2534 out.go:358] Setting ErrFile to fd 2...
I0826 03:45:34.551986    2534 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.552119    2534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
I0826 03:45:34.552593    2534 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.552652    2534 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.553545    2534 ssh_runner.go:195] Run: systemctl --version
I0826 03:45:34.553554    2534 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/functional-690000/id_rsa Username:docker}
I0826 03:45:34.576363    2534 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
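
The JSON listing above is an array of objects with id, repoDigests, repoTags, and size, where size is a decimal string rather than a number. A minimal decoding sketch; the struct fields mirror only what is visible in this run's output, and the sample element is copied verbatim from it:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // imageEntry mirrors the fields visible in `image ls --format json` above;
    // no fields beyond these are assumed.
    type imageEntry struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // decimal string, e.g. "240000"
    }

    func main() {
        raw := `[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]`
        var images []imageEntry
        if err := json.Unmarshal([]byte(raw), &images); err != nil {
            log.Fatal(err)
        }
        for _, img := range images {
            fmt.Println(img.RepoTags, img.Size)
        }
    }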

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-690000 image ls --format yaml --alsologtostderr:
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-690000
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 349fbdb5881e162d98e9005784c939e7e93268e52c0833ba8b52dec0f91a2172
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-690000
size: "30"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-690000 image ls --format yaml --alsologtostderr:
I0826 03:45:34.128219    2527 out.go:345] Setting OutFile to fd 1 ...
I0826 03:45:34.128407    2527 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.128411    2527 out.go:358] Setting ErrFile to fd 2...
I0826 03:45:34.128413    2527 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.128536    2527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
I0826 03:45:34.128997    2527 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.129060    2527 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.129893    2527 ssh_runner.go:195] Run: systemctl --version
I0826 03:45:34.129902    2527 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/functional-690000/id_rsa Username:docker}
I0826 03:45:34.157686    2527 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 ssh pgrep buildkitd: exit status 1 (57.759167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image build -t localhost/my-image:functional-690000 testdata/build --alsologtostderr
2024/08/26 03:45:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-690000 image build -t localhost/my-image:functional-690000 testdata/build --alsologtostderr: (1.767315791s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-690000 image build -t localhost/my-image:functional-690000 testdata/build --alsologtostderr:
I0826 03:45:34.260220    2531 out.go:345] Setting OutFile to fd 1 ...
I0826 03:45:34.260436    2531 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.260443    2531 out.go:358] Setting ErrFile to fd 2...
I0826 03:45:34.260445    2531 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 03:45:34.260588    2531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19501-1045/.minikube/bin
I0826 03:45:34.261014    2531 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.261873    2531 config.go:182] Loaded profile config "functional-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0826 03:45:34.262652    2531 ssh_runner.go:195] Run: systemctl --version
I0826 03:45:34.262660    2531 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19501-1045/.minikube/machines/functional-690000/id_rsa Username:docker}
I0826 03:45:34.284220    2531 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.941549766.tar
I0826 03:45:34.284282    2531 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0826 03:45:34.288789    2531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.941549766.tar
I0826 03:45:34.290304    2531 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.941549766.tar: stat -c "%s %y" /var/lib/minikube/build/build.941549766.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.941549766.tar': No such file or directory
I0826 03:45:34.290320    2531 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.941549766.tar --> /var/lib/minikube/build/build.941549766.tar (3072 bytes)
I0826 03:45:34.304175    2531 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.941549766
I0826 03:45:34.310419    2531 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.941549766 -xf /var/lib/minikube/build/build.941549766.tar
I0826 03:45:34.315155    2531 docker.go:360] Building image: /var/lib/minikube/build/build.941549766
I0826 03:45:34.315239    2531 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-690000 /var/lib/minikube/build/build.941549766
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.1s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:946d137efc3e0a6c4462b274a2f40d5061eacbecefe29318f731dabbb00db6d3 done
#8 naming to localhost/my-image:functional-690000 done
#8 DONE 0.0s
I0826 03:45:35.985700    2531 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-690000 /var/lib/minikube/build/build.941549766: (1.670514792s)
I0826 03:45:35.985765    2531 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.941549766
I0826 03:45:35.989540    2531 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.941549766.tar
I0826 03:45:35.992885    2531 build_images.go:217] Built localhost/my-image:functional-690000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.941549766.tar
I0826 03:45:35.992902    2531 build_images.go:133] succeeded building to: functional-690000
I0826 03:45:35.992905    2531 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.89s)
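
The stderr trace above shows the shape of `image build`: tar the local context, copy it to /var/lib/minikube/build inside the guest, untar it, and run `docker build` there. From the host the whole sequence is one CLI call; a hedged sketch that builds and then confirms the tag appears in the listing, using only commands shown in this run:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        bin, profile := "out/minikube-darwin-arm64", "functional-690000"
        // Build from the same testdata context the suite uses.
        build := exec.Command(bin, "-p", profile, "image", "build",
            "-t", "localhost/my-image:"+profile, "testdata/build")
        if out, err := build.CombinedOutput(); err != nil {
            log.Fatalf("image build failed: %v\n%s", err, out)
        }
        // Verify the built tag shows up, mirroring the trailing `image ls`.
        ls, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
        if err != nil {
            log.Fatal(err)
        }
        if !strings.Contains(string(ls), "localhost/my-image") {
            log.Fatal("built image not listed")
        }
    }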

TestFunctional/parallel/ImageCommands/Setup (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.783545834s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-690000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/DockerEnv/bash (0.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-690000 docker-env) && out/minikube-darwin-arm64 status -p functional-690000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-690000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-690000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-690000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7mkv9" [812b093a-8e28-4d57-85e9-b90187f90d64] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-7mkv9" [812b093a-8e28-4d57-85e9-b90187f90d64] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.011535916s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
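
DeployApp is two kubectl calls against the functional cluster's context, followed by a wait for the pod to become ready. A minimal sketch of the same two calls (kubectl on PATH is assumed; the readiness wait loop is omitted):

    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        // Commands taken verbatim from the test log above.
        run("--context", "functional-690000", "create", "deployment", "hello-node",
            "--image=registry.k8s.io/echoserver-arm:1.8")
        run("--context", "functional-690000", "expose", "deployment", "hello-node",
            "--type=NodePort", "--port=8080")
    }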

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image load --daemon kicbase/echo-server:functional-690000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image load --daemon kicbase/echo-server:functional-690000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-690000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image load --daemon kicbase/echo-server:functional-690000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image save kicbase/echo-server:functional-690000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image rm kicbase/echo-server:functional-690000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
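
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a save/remove/reload round trip for a cached image. A sketch of the equivalent host-side calls, using the same tar path as this run:

    package main

    import (
        "log"
        "os/exec"
    )

    // mk runs one minikube subcommand against the functional profile and fails
    // loudly, mirroring how the tests treat any non-zero exit.
    func mk(args ...string) {
        cmd := exec.Command("out/minikube-darwin-arm64",
            append([]string{"-p", "functional-690000"}, args...)...)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("%v: %v\n%s", args, err, out)
        }
    }

    func main() {
        tar := "/Users/jenkins/workspace/echo-server-save.tar"
        mk("image", "save", "kicbase/echo-server:functional-690000", tar)
        mk("image", "rm", "kicbase/echo-server:functional-690000")
        mk("image", "load", tar)
    }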

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-690000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 image save --daemon kicbase/echo-server:functional-690000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-690000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-690000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-690000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-690000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2341: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-690000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.56s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-690000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-690000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [33bf5e81-68af-49e2-bac5-2ff04ad2010d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [33bf5e81-68af-49e2-bac5-2ff04ad2010d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00987875s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ServiceCmd/List (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 service list -o json
functional_test.go:1494: Took "81.695334ms" to run "out/minikube-darwin-arm64 -p functional-690000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31975
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31975
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-690000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.81.108 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
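
AccessDirect only works while `minikube tunnel` is running: the service IP (10.107.81.108 in this run) then routes directly from the host. A small probe along the lines of what the test checks; the IP is specific to this run:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://10.107.81.108")
        if err != nil {
            log.Fatal(err) // tunnel not running, or IP not routable
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }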

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-690000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "81.83825ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.367209ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "81.567ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "35.497125ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
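
These profile tests only assert that `profile list -o json` returns quickly and emits parseable JSON. A sketch that consumes it the same way, deliberately without assuming the exact schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "-o", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        // Decode generically; the schema is not assumed here.
        var v map[string]any
        if err := json.Unmarshal(out, &v); err != nil {
            log.Fatal(err)
        }
        for k := range v {
            fmt.Println("top-level key:", k)
        }
    }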

TestFunctional/parallel/MountCmd/any-port (4.98s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1826556862/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724669119820873000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1826556862/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724669119820873000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1826556862/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724669119820873000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1826556862/001/test-1724669119820873000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (55.6305ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 26 10:45 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 26 10:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 26 10:45 test-1724669119820873000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh cat /mount-9p/test-1724669119820873000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-690000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f0646188-5b5b-432d-a1b9-676ca37f2128] Pending
helpers_test.go:344: "busybox-mount" [f0646188-5b5b-432d-a1b9-676ca37f2128] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f0646188-5b5b-432d-a1b9-676ca37f2128] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f0646188-5b5b-432d-a1b9-676ca37f2128] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.005942s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-690000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1826556862/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.98s)
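
Note the first `findmnt` probe above fails while the 9p mount is still settling, and the test simply retries. A small poll loop in the same spirit (retry count and delay here are arbitrary, not the suite's values):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 10; i++ {
            // Same probe the test runs over minikube ssh.
            cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-690000",
                "ssh", "findmnt -T /mount-9p | grep 9p")
            if err := cmd.Run(); err == nil {
                log.Println("mount is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("mount never appeared")
    }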

TestFunctional/parallel/MountCmd/specific-port (0.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2684084787/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.131042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2684084787/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 ssh "sudo umount -f /mount-9p": exit status 1 (62.382ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-690000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2684084787/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.93s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1704285439/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1704285439/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1704285439/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T" /mount1: exit status 1 (76.759459ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T" /mount3: exit status 1 (54.230417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-690000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-690000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1704285439/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1704285439/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-690000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1704285439/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-690000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-690000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-690000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (177.62s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-139000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0826 03:46:05.478544    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:48:21.584466    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-139000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m57.429998042s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (177.62s)
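
StartCluster drives a multi-control-plane (`--ha`) bring-up with the exact flags logged above, then checks `status`. A host-side sketch that streams the CLI output while the roughly three-minute start runs:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Flags copied from the ha_test.go:101 invocation above.
        cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "ha-139000",
            "--wait=true", "--memory=2200", "--ha", "-v=7",
            "--alsologtostderr", "--driver=qemu2")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }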

TestMultiControlPlane/serial/DeployApp (4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-139000 -- rollout status deployment/busybox: (2.510968125s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-chpxh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-d8sf8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-sgpsd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-chpxh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-d8sf8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-sgpsd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-chpxh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-d8sf8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-sgpsd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.00s)

TestMultiControlPlane/serial/PingHostFromPods (0.71s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-chpxh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-chpxh -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-d8sf8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-d8sf8 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-sgpsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-139000 -- exec busybox-7dff88458-sgpsd -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.71s)
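
The sh -c pipeline above recovers the host's IP from BusyBox nslookup output: awk 'NR==5' keeps only the fifth line of the reply, and cut -d' ' -f3 takes that line's third space-separated field, which is what ping then targets. An illustrative reply (the line layout is assumed from BusyBox nslookup; only the final 192.168.105.1 is confirmed, by the ping commands above):

	$ nslookup host.minikube.internal
	Server:    10.96.0.10                                          <- line 1
	Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local   <- line 2
	                                                               <- line 3 (blank)
	Name:      host.minikube.internal                              <- line 4
	Address 1: 192.168.105.1 host.minikube.internal                <- line 5; field 3 = 192.168.105.1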

TestMultiControlPlane/serial/AddWorkerNode (58.02s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-139000 -v=7 --alsologtostderr
E0826 03:48:49.315459    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-139000 -v=7 --alsologtostderr: (57.7978005s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.02s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-139000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.27s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp testdata/cp-test.txt ha-139000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile155263332/001/cp-test_ha-139000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000:/home/docker/cp-test.txt ha-139000-m02:/home/docker/cp-test_ha-139000_ha-139000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m02 "sudo cat /home/docker/cp-test_ha-139000_ha-139000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000:/home/docker/cp-test.txt ha-139000-m03:/home/docker/cp-test_ha-139000_ha-139000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m03 "sudo cat /home/docker/cp-test_ha-139000_ha-139000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000:/home/docker/cp-test.txt ha-139000-m04:/home/docker/cp-test_ha-139000_ha-139000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m04 "sudo cat /home/docker/cp-test_ha-139000_ha-139000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp testdata/cp-test.txt ha-139000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile155263332/001/cp-test_ha-139000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m02:/home/docker/cp-test.txt ha-139000:/home/docker/cp-test_ha-139000-m02_ha-139000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000 "sudo cat /home/docker/cp-test_ha-139000-m02_ha-139000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m02:/home/docker/cp-test.txt ha-139000-m03:/home/docker/cp-test_ha-139000-m02_ha-139000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m03 "sudo cat /home/docker/cp-test_ha-139000-m02_ha-139000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m02:/home/docker/cp-test.txt ha-139000-m04:/home/docker/cp-test_ha-139000-m02_ha-139000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m04 "sudo cat /home/docker/cp-test_ha-139000-m02_ha-139000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp testdata/cp-test.txt ha-139000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile155263332/001/cp-test_ha-139000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m03:/home/docker/cp-test.txt ha-139000:/home/docker/cp-test_ha-139000-m03_ha-139000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000 "sudo cat /home/docker/cp-test_ha-139000-m03_ha-139000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m03:/home/docker/cp-test.txt ha-139000-m02:/home/docker/cp-test_ha-139000-m03_ha-139000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m02 "sudo cat /home/docker/cp-test_ha-139000-m03_ha-139000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m03:/home/docker/cp-test.txt ha-139000-m04:/home/docker/cp-test_ha-139000-m03_ha-139000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m04 "sudo cat /home/docker/cp-test_ha-139000-m03_ha-139000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp testdata/cp-test.txt ha-139000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile155263332/001/cp-test_ha-139000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m04:/home/docker/cp-test.txt ha-139000:/home/docker/cp-test_ha-139000-m04_ha-139000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000 "sudo cat /home/docker/cp-test_ha-139000-m04_ha-139000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m04:/home/docker/cp-test.txt ha-139000-m02:/home/docker/cp-test_ha-139000-m04_ha-139000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m02 "sudo cat /home/docker/cp-test_ha-139000-m04_ha-139000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 cp ha-139000-m04:/home/docker/cp-test.txt ha-139000-m03:/home/docker/cp-test_ha-139000-m04_ha-139000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-139000 ssh -n ha-139000-m03 "sudo cat /home/docker/cp-test_ha-139000-m04_ha-139000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.27s)
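
The sequence above exercises minikube cp across every ordered pair of nodes: seed each node from testdata/cp-test.txt, copy node-to-node, and verify every hop with ssh ... sudo cat. A condensed Go sketch of the same matrix; this is illustrative only (the real loop lives in helpers_test.go), using this run's profile and node names:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary under test and prints the result.
	func run(args ...string) {
		out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
		fmt.Printf("minikube %v (err=%v)\n%s", args, err, out)
	}

	func main() {
		nodes := []string{"ha-139000", "ha-139000-m02", "ha-139000-m03", "ha-139000-m04"}
		for _, src := range nodes {
			// Seed the source node, then fan the file out to every other node.
			run("-p", "ha-139000", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				target := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
				run("-p", "ha-139000", "cp", src+":/home/docker/cp-test.txt", target)
				run("-p", "ha-139000", "ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
			}
		}
	}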

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0826 03:59:43.793454    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/functional-690000/client.crt: no such file or directory" logger="UnhandledError"
E0826 03:59:44.764777    1539 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19501-1045/.minikube/profiles/addons-293000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.940749417s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.94s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-638000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-638000 --output=json --user=testUser: (3.75664875s)
--- PASS: TestJSONOutput/stop/Command (3.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-726000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-726000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.798416ms)

-- stdout --
	{"specversion":"1.0","id":"1cc8b66b-2884-4a88-abce-11cc85ef1858","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-726000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cc30277-6859-46c6-9732-fe0d57def932","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19501"}}
	{"specversion":"1.0","id":"1862edf2-d6d2-4952-91e2-16eb9345ea8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig"}}
	{"specversion":"1.0","id":"6066c8d6-bd4f-42e9-ad04-f54e3227a586","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c6a4b698-2a44-42d4-bf01-e68984641ee2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab87333e-f4a0-4066-a1aa-d195af9e7fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube"}}
	{"specversion":"1.0","id":"775ed6b7-dbde-48d2-9c49-4a7523279550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a2e5711-b821-40c9-8eb6-b29d6dd0a5d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-726000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-726000
--- PASS: TestErrorJSONOutput (0.20s)
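
Each stdout line above is a CloudEvents-style JSON object. A small Go sketch that decodes such a stream and surfaces the error event; the field set is inferred from the lines above, and minikube may emit more fields than are modeled here:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the log lines above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // tolerate non-JSON lines mixed into the stream
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				// e.g. exitcode 56 / DRV_UNSUPPORTED_OS in the run above.
				fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
			}
		}
	}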

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.09s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.09s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-743000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-819000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-819000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (109.700625ms)

-- stdout --
	* [NoKubernetes-819000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19501
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19501-1045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19501-1045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
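
The exit-14 (MK_USAGE) failure above is a pure flag-validation rejection: --no-kubernetes and --kubernetes-version are mutually exclusive. An illustrative Go sketch of that kind of guard, mirroring the observed behavior only and not minikube's actual implementation:

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
		flag.Parse()
		if *noK8s && *version != "" {
			// Same message and exit status as the run above (MK_USAGE = 14).
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
	}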

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-819000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-819000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.431916ms)

-- stdout --
	* The control-plane node NoKubernetes-819000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-819000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.09s)

TestNoKubernetes/serial/Stop (3.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-819000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-819000: (3.238821667s)
--- PASS: TestNoKubernetes/serial/Stop (3.24s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-819000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-819000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.544166ms)

-- stdout --
	* The control-plane node NoKubernetes-819000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-819000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-173000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-173000 --alsologtostderr -v=3: (3.268835584s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-173000 -n old-k8s-version-173000: exit status 7 (59.424709ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-173000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.8s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-993000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-993000 --alsologtostderr -v=3: (3.796146291s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.80s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-993000 -n no-preload-993000: exit status 7 (59.316875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-993000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (2.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-434000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-434000 --alsologtostderr -v=3: (2.056389875s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-434000 -n embed-certs-434000: exit status 7 (59.353584ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-434000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-727000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-727000 --alsologtostderr -v=3: (3.465442958s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.47s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-727000 -n default-k8s-diff-port-727000: exit status 7 (57.274333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-727000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-584000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-584000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-584000 --alsologtostderr -v=3: (2.127466791s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (57.58125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-584000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.42s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-336000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: iptables-save:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: iptables table nat:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-336000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-336000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-336000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-336000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-336000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-336000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-336000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-336000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-336000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-336000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-336000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: kubelet daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> k8s: kubelet logs:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-336000

>>> host: docker daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: docker daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: docker system info:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: cri-docker daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: cri-docker daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: cri-dockerd version:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: containerd daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: containerd daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: containerd config dump:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: crio daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: crio daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/crio:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: crio config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

----------------------- debugLogs end: cilium-336000 [took: 2.308743917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-336000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-336000
--- SKIP: TestNetworkPlugins/group/cilium (2.42s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-648000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-648000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
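
Note on the skip above: per start_stop_delete_test.go:103, the disable-driver-mounts group is gated on the VirtualBox driver (the --disable-driver-mounts flag only affects VirtualBox's host-folder mounts), so under qemu2 on macOS the group self-skips before any cluster is created. A minimal Go sketch of that kind of driver gate follows; the helper name maybeSkipDisableDriverMounts and its driver parameter are hypothetical, and only the skip message is taken from the log:

    package integration

    import "testing"

    // maybeSkipDisableDriverMounts skips the calling test unless the run is
    // using the virtualbox driver, mirroring the skip message logged above.
    func maybeSkipDisableDriverMounts(t *testing.T, driver string) {
        t.Helper()
        if driver != "virtualbox" {
            t.Skipf("skipping %s - only runs on virtualbox", t.Name())
        }
    }

A gate like this runs at the top of the test body, so the 0.10s duration reported above reflects only profile cleanup, not a cluster start.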